I0218 15:50:35.325029 9 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0218 15:50:35.325763 9 e2e.go:109] Starting e2e run "b868a767-b5fa-4ab2-9d3c-da88877e1d20" on Ginkgo node 1
{"msg":"Test Suite starting","total":280,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1582041033 - Will randomize all specs
Will run 280 of 4845 specs

Feb 18 15:50:35.400: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 15:50:35.403: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 18 15:50:35.421: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 18 15:50:35.444: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 18 15:50:35.444: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 18 15:50:35.444: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 18 15:50:35.451: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 18 15:50:35.451: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 18 15:50:35.451: INFO: e2e test version: v1.18.0-alpha.2.152+426b3538900329
Feb 18 15:50:35.452: INFO: kube-apiserver version: v1.17.0
Feb 18 15:50:35.452: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 15:50:35.456: INFO: Cluster IP family: ipv4

[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 15:50:35.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
Feb 18 15:50:35.539: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6789
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating statefulset ss in namespace statefulset-6789
Feb 18 15:50:35.567: INFO: Found 0 stateful pods, waiting for 1
Feb 18 15:50:45.576: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 18 15:50:45.647: INFO: Deleting all statefulset in ns statefulset-6789
Feb 18 15:50:45.655: INFO: Scaling statefulset ss to 0
Feb 18 15:51:05.754: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 15:51:05.758: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 15:51:05.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6789" for this suite.

• [SLOW TEST:30.359 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":280,"completed":1,"skipped":0,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 15:51:05.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 18 15:51:05.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ec44d63-b965-478d-999f-b926ac154519" in namespace "downward-api-6393" to be "success or failure"
Feb 18 15:51:05.943: INFO: Pod "downwardapi-volume-8ec44d63-b965-478d-999f-b926ac154519": Phase="Pending", Reason="", readiness=false. Elapsed: 20.379027ms
Feb 18 15:51:07.952: INFO: Pod "downwardapi-volume-8ec44d63-b965-478d-999f-b926ac154519": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029161937s
Feb 18 15:51:09.958: INFO: Pod "downwardapi-volume-8ec44d63-b965-478d-999f-b926ac154519": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035595726s
Feb 18 15:51:11.969: INFO: Pod "downwardapi-volume-8ec44d63-b965-478d-999f-b926ac154519": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046138298s
Feb 18 15:51:13.975: INFO: Pod "downwardapi-volume-8ec44d63-b965-478d-999f-b926ac154519": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05201924s
STEP: Saw pod success
Feb 18 15:51:13.975: INFO: Pod "downwardapi-volume-8ec44d63-b965-478d-999f-b926ac154519" satisfied condition "success or failure"
Feb 18 15:51:13.977: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8ec44d63-b965-478d-999f-b926ac154519 container client-container:
STEP: delete the pod
Feb 18 15:51:14.072: INFO: Waiting for pod downwardapi-volume-8ec44d63-b965-478d-999f-b926ac154519 to disappear
Feb 18 15:51:14.078: INFO: Pod downwardapi-volume-8ec44d63-b965-478d-999f-b926ac154519 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 15:51:14.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6393" for this suite.

• [SLOW TEST:8.275 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":2,"skipped":2,"failed":0}
SS
------------------------------
[sig-network] DNS
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 15:51:14.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8149.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8149.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8149.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8149.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 15:51:28.291: INFO: DNS probes using dns-test-a7c11e4c-68e3-462e-bfe3-3e352ed134ce succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8149.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8149.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8149.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8149.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 15:51:40.462: INFO: File wheezy_udp@dns-test-service-3.dns-8149.svc.cluster.local from pod dns-8149/dns-test-2a5ab779-6e64-4f6f-ab8b-0db6b616f904 contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb 18 15:51:40.475: INFO: File jessie_udp@dns-test-service-3.dns-8149.svc.cluster.local from pod dns-8149/dns-test-2a5ab779-6e64-4f6f-ab8b-0db6b616f904 contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb 18 15:51:40.475: INFO: Lookups using dns-8149/dns-test-2a5ab779-6e64-4f6f-ab8b-0db6b616f904 failed for: [wheezy_udp@dns-test-service-3.dns-8149.svc.cluster.local jessie_udp@dns-test-service-3.dns-8149.svc.cluster.local]
Feb 18 15:51:45.491: INFO: File wheezy_udp@dns-test-service-3.dns-8149.svc.cluster.local from pod dns-8149/dns-test-2a5ab779-6e64-4f6f-ab8b-0db6b616f904 contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb 18 15:51:45.497: INFO: File jessie_udp@dns-test-service-3.dns-8149.svc.cluster.local from pod dns-8149/dns-test-2a5ab779-6e64-4f6f-ab8b-0db6b616f904 contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb 18 15:51:45.497: INFO: Lookups using dns-8149/dns-test-2a5ab779-6e64-4f6f-ab8b-0db6b616f904 failed for: [wheezy_udp@dns-test-service-3.dns-8149.svc.cluster.local jessie_udp@dns-test-service-3.dns-8149.svc.cluster.local]
Feb 18 15:51:50.490: INFO: File wheezy_udp@dns-test-service-3.dns-8149.svc.cluster.local from pod dns-8149/dns-test-2a5ab779-6e64-4f6f-ab8b-0db6b616f904 contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb 18 15:51:50.498: INFO: File jessie_udp@dns-test-service-3.dns-8149.svc.cluster.local from pod dns-8149/dns-test-2a5ab779-6e64-4f6f-ab8b-0db6b616f904 contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb 18 15:51:50.498: INFO: Lookups using dns-8149/dns-test-2a5ab779-6e64-4f6f-ab8b-0db6b616f904 failed for: [wheezy_udp@dns-test-service-3.dns-8149.svc.cluster.local jessie_udp@dns-test-service-3.dns-8149.svc.cluster.local]
Feb 18 15:51:55.499: INFO: DNS probes using dns-test-2a5ab779-6e64-4f6f-ab8b-0db6b616f904 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8149.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8149.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8149.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8149.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 15:52:07.738: INFO: DNS probes using dns-test-a790bc5e-4b32-46fc-8794-4759308141bb succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 15:52:07.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8149" for this suite.

• [SLOW TEST:53.822 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":280,"completed":3,"skipped":4,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 15:52:07.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-g448
STEP: Creating a pod to test atomic-volume-subpath
Feb 18 15:52:08.084: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-g448" in namespace "subpath-8705" to be "success or failure"
Feb 18 15:52:08.198: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Pending", Reason="", readiness=false. Elapsed: 114.100953ms
Feb 18 15:52:10.203: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119255623s
Feb 18 15:52:12.210: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12629049s
Feb 18 15:52:14.226: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141524607s
Feb 18 15:52:16.232: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Running", Reason="", readiness=true. Elapsed: 8.14803942s
Feb 18 15:52:18.240: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Running", Reason="", readiness=true. Elapsed: 10.155688281s
Feb 18 15:52:20.247: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Running", Reason="", readiness=true. Elapsed: 12.163180565s
Feb 18 15:52:22.282: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Running", Reason="", readiness=true. Elapsed: 14.197724014s
Feb 18 15:52:24.287: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Running", Reason="", readiness=true. Elapsed: 16.203343184s
Feb 18 15:52:26.302: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Running", Reason="", readiness=true. Elapsed: 18.21810318s
Feb 18 15:52:28.321: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Running", Reason="", readiness=true. Elapsed: 20.237387893s
Feb 18 15:52:30.331: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Running", Reason="", readiness=true. Elapsed: 22.247008658s
Feb 18 15:52:32.398: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Running", Reason="", readiness=true. Elapsed: 24.31419797s
Feb 18 15:52:34.415: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Running", Reason="", readiness=true. Elapsed: 26.330929179s
Feb 18 15:52:36.422: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Running", Reason="", readiness=true. Elapsed: 28.337739364s
Feb 18 15:52:38.430: INFO: Pod "pod-subpath-test-configmap-g448": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.346131473s
STEP: Saw pod success
Feb 18 15:52:38.431: INFO: Pod "pod-subpath-test-configmap-g448" satisfied condition "success or failure"
Feb 18 15:52:38.435: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-g448 container test-container-subpath-configmap-g448:
STEP: delete the pod
Feb 18 15:52:38.600: INFO: Waiting for pod pod-subpath-test-configmap-g448 to disappear
Feb 18 15:52:38.627: INFO: Pod pod-subpath-test-configmap-g448 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-g448
Feb 18 15:52:38.627: INFO: Deleting pod "pod-subpath-test-configmap-g448" in namespace "subpath-8705"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 15:52:38.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8705" for this suite.
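The subpath pod above mounts a single ConfigMap key, via subPath, over a file that already exists in the container image. A minimal sketch of an equivalent pod, assuming illustrative object names and a stock busybox image rather than the suite's actual fixtures:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: subpath-demo-cm              # illustrative name
  data:
    configmap-contents: "value from the configmap"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-demo                 # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["cat", "/etc/hostname"]   # prints the ConfigMap value, not the image's file
      volumeMounts:
      - name: cm
        mountPath: /etc/hostname          # mountPath is the existing file itself
        subPath: configmap-contents       # select a single key from the volume
    volumes:
    - name: cm
      configMap:
        name: subpath-demo-cm
  EOF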
• [SLOW TEST:30.739 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":280,"completed":4,"skipped":11,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 15:52:38.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 18 15:52:38.758: INFO: Waiting up to 5m0s for pod "pod-6b9bd1a0-a763-4e05-96d6-75fb55ecc7ae" in namespace "emptydir-1335" to be "success or failure"
Feb 18 15:52:38.774: INFO: Pod "pod-6b9bd1a0-a763-4e05-96d6-75fb55ecc7ae": Phase="Pending", Reason="", readiness=false. Elapsed: 15.734704ms
Feb 18 15:52:40.786: INFO: Pod "pod-6b9bd1a0-a763-4e05-96d6-75fb55ecc7ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027928371s
Feb 18 15:52:42.793: INFO: Pod "pod-6b9bd1a0-a763-4e05-96d6-75fb55ecc7ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034483983s
Feb 18 15:52:44.816: INFO: Pod "pod-6b9bd1a0-a763-4e05-96d6-75fb55ecc7ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057959817s
Feb 18 15:52:46.838: INFO: Pod "pod-6b9bd1a0-a763-4e05-96d6-75fb55ecc7ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079931779s
STEP: Saw pod success
Feb 18 15:52:46.838: INFO: Pod "pod-6b9bd1a0-a763-4e05-96d6-75fb55ecc7ae" satisfied condition "success or failure"
Feb 18 15:52:46.843: INFO: Trying to get logs from node jerma-node pod pod-6b9bd1a0-a763-4e05-96d6-75fb55ecc7ae container test-container:
STEP: delete the pod
Feb 18 15:52:46.903: INFO: Waiting for pod pod-6b9bd1a0-a763-4e05-96d6-75fb55ecc7ae to disappear
Feb 18 15:52:46.913: INFO: Pod pod-6b9bd1a0-a763-4e05-96d6-75fb55ecc7ae no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 15:52:46.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1335" for this suite.
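The emptyDir case just finished combines three things: a tmpfs-backed volume (medium: Memory), a non-root user, and a 0666 file mode. A sketch of a pod demonstrating the same combination (names and image are illustrative, not the suite's fixtures):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                     # non-root
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo hi > /mnt/test/f && chmod 0666 /mnt/test/f && ls -l /mnt/test/f && mount | grep /mnt/test"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt/test
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory                    # tmpfs instead of node disk
  EOF

The container log should show -rw-rw-rw- for the file and a tmpfs mount entry for /mnt/test.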
• [SLOW TEST:8.272 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":5,"skipped":19,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 15:52:46.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 15:52:58.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5642" for this suite.

• [SLOW TEST:11.210 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":280,"completed":6,"skipped":27,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 15:52:58.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 15:52:58.959: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 15:53:00.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717637978, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717637978, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717637979, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717637978, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 15:53:02.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717637978, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717637978, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717637979, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717637978, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 15:53:04.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717637978, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717637978, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717637979, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717637978, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 15:53:08.006: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 15:53:08.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7880" for this suite.
STEP: Destroying namespace "webhook-7880-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.371 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":280,"completed":7,"skipped":36,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 15:53:08.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3969
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Feb 18 15:53:08.923: INFO: Found 0 stateful pods, waiting for 3
Feb 18 15:53:18.947: INFO: Found 2 stateful pods, waiting for 3
Feb 18 15:53:28.936: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 15:53:28.936: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 15:53:28.936: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 18 15:53:38.933: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 15:53:38.933: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 15:53:38.933: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 18 15:53:38.967: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 18 15:53:49.031: INFO: Updating stateful set ss2
Feb 18 15:53:49.042: INFO: Waiting for Pod statefulset-3969/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Feb 18 15:53:59.284: INFO: Found 2 stateful pods, waiting for 3
Feb 18 15:54:10.533: INFO: Found 2 stateful pods, waiting for 3
Feb 18 15:54:19.300: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 15:54:19.300: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 15:54:19.300: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 18 15:54:29.296: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 15:54:29.296: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 15:54:29.296: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 18 15:54:29.326: INFO: Updating stateful set ss2
Feb 18 15:54:29.523: INFO: Waiting for Pod statefulset-3969/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 15:54:39.541: INFO: Waiting for Pod statefulset-3969/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 15:54:49.656: INFO: Updating stateful set ss2
Feb 18 15:54:50.093: INFO: Waiting for StatefulSet statefulset-3969/ss2 to complete update
Feb 18 15:54:50.093: INFO: Waiting for Pod statefulset-3969/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 15:55:00.103: INFO: Waiting for StatefulSet statefulset-3969/ss2 to complete update
Feb 18 15:55:00.103: INFO: Waiting for Pod statefulset-3969/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 15:55:10.103: INFO: Waiting for StatefulSet statefulset-3969/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 18 15:55:20.112: INFO: Deleting all statefulset in ns statefulset-3969
Feb 18 15:55:20.119: INFO: Scaling statefulset ss2 to 0
Feb 18 15:56:00.179: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 15:56:00.184: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 15:56:00.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3969" for this suite.

• [SLOW TEST:172.279 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":280,"completed":8,"skipped":51,"failed":0}
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 15:56:00.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 18 15:56:00.976: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 15:56:12.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4414" for this suite.
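What the init-container test above asserts can be reproduced directly: with restartPolicy: Never, a failing init container moves the pod to Failed and the app container never starts. A sketch (names and image are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-fail-demo
  spec:
    restartPolicy: Never
    initContainers:
    - name: init1
      image: busybox
      command: ["sh", "-c", "exit 1"]     # always fails, so init never completes
    containers:
    - name: app
      image: busybox
      command: ["top"]                    # should never be started
  EOF
  # after a few seconds the pod should be Failed with an Init:Error status:
  kubectl get pod init-fail-demo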
• [SLOW TEST:11.851 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":280,"completed":9,"skipped":56,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 15:56:12.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 18 15:56:12.764: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d713ce21-1ea3-460d-ab7a-1c11331ae716" in namespace "projected-7332" to be "success or failure"
Feb 18 15:56:12.799: INFO: Pod "downwardapi-volume-d713ce21-1ea3-460d-ab7a-1c11331ae716": Phase="Pending", Reason="", readiness=false. Elapsed: 34.667801ms
Feb 18 15:56:14.809: INFO: Pod "downwardapi-volume-d713ce21-1ea3-460d-ab7a-1c11331ae716": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044797668s
Feb 18 15:56:16.819: INFO: Pod "downwardapi-volume-d713ce21-1ea3-460d-ab7a-1c11331ae716": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054576115s
Feb 18 15:56:18.829: INFO: Pod "downwardapi-volume-d713ce21-1ea3-460d-ab7a-1c11331ae716": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064180274s
Feb 18 15:56:20.838: INFO: Pod "downwardapi-volume-d713ce21-1ea3-460d-ab7a-1c11331ae716": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073122789s
STEP: Saw pod success
Feb 18 15:56:20.838: INFO: Pod "downwardapi-volume-d713ce21-1ea3-460d-ab7a-1c11331ae716" satisfied condition "success or failure"
Feb 18 15:56:20.843: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d713ce21-1ea3-460d-ab7a-1c11331ae716 container client-container:
STEP: delete the pod
Feb 18 15:56:21.465: INFO: Waiting for pod downwardapi-volume-d713ce21-1ea3-460d-ab7a-1c11331ae716 to disappear
Feb 18 15:56:21.474: INFO: Pod downwardapi-volume-d713ce21-1ea3-460d-ab7a-1c11331ae716 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 15:56:21.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7332" for this suite.
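The projected downwardAPI test relies on a documented default: when a container sets no memory limit, limits.memory exposed through the downward API resolves to the node's allocatable memory. A sketch of such a pod (names and image are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-downward-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["cat", "/etc/podinfo/memory_limit"]
      # no resources.limits.memory here, so the file reports node allocatable
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
  EOF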
• [SLOW TEST:8.848 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":10,"skipped":59,"failed":0}
SSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 15:56:21.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-3889/configmap-test-a3700b7f-0cf3-426b-ad31-c084dd7317f0
STEP: Creating a pod to test consume configMaps
Feb 18 15:56:21.644: INFO: Waiting up to 5m0s for pod "pod-configmaps-79892a9a-9c27-4284-9e6d-42a8eafb8cf8" in namespace "configmap-3889" to be "success or failure"
Feb 18 15:56:21.661: INFO: Pod "pod-configmaps-79892a9a-9c27-4284-9e6d-42a8eafb8cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.011486ms
Feb 18 15:56:23.671: INFO: Pod "pod-configmaps-79892a9a-9c27-4284-9e6d-42a8eafb8cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026313913s
Feb 18 15:56:25.678: INFO: Pod "pod-configmaps-79892a9a-9c27-4284-9e6d-42a8eafb8cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03336839s
Feb 18 15:56:27.686: INFO: Pod "pod-configmaps-79892a9a-9c27-4284-9e6d-42a8eafb8cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041864853s
Feb 18 15:56:29.694: INFO: Pod "pod-configmaps-79892a9a-9c27-4284-9e6d-42a8eafb8cf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049141209s
STEP: Saw pod success
Feb 18 15:56:29.694: INFO: Pod "pod-configmaps-79892a9a-9c27-4284-9e6d-42a8eafb8cf8" satisfied condition "success or failure"
Feb 18 15:56:29.699: INFO: Trying to get logs from node jerma-node pod pod-configmaps-79892a9a-9c27-4284-9e6d-42a8eafb8cf8 container env-test:
STEP: delete the pod
Feb 18 15:56:29.743: INFO: Waiting for pod pod-configmaps-79892a9a-9c27-4284-9e6d-42a8eafb8cf8 to disappear
Feb 18 15:56:29.756: INFO: Pod pod-configmaps-79892a9a-9c27-4284-9e6d-42a8eafb8cf8 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 15:56:29.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3889" for this suite.
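The ConfigMap test above consumes a key as an environment variable via valueFrom. A sketch of the same shape (object names and values are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: env-demo-cm
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["env"]                    # dump the environment and exit
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef:
            name: env-demo-cm
            key: data-1
  EOF
  kubectl logs env-demo | grep CONFIG_DATA_1   # expect CONFIG_DATA_1=value-1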
• [SLOW TEST:8.323 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":280,"completed":11,"skipped":65,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 15:56:29.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 15:56:30.736: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 15:56:32.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638190, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638190, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638190, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638190, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 15:56:34.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638190, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638190, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638190, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638190, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 15:56:36.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638190, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638190, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638190, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638190, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 15:56:39.804: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 15:56:40.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9632" for this suite.
STEP: Destroying namespace "webhook-9632-markers" for this suite.
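Registering a mutating pod webhook like the one this test deploys is a single API object. A sketch, assuming you already run a TLS webhook server behind a Service; the service name, namespace, path, and CA bundle below are placeholders, not the suite's fixtures:

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: mutate-pods-demo
  webhooks:
  - name: mutate-pods.example.com
    clientConfig:
      service:
        namespace: webhook-demo           # namespace of your webhook Service
        name: e2e-test-webhook
        path: /mutating-pods
      caBundle: "<base64-encoded CA certificate>"   # CA that signed the server cert
    rules:
    - operations: ["CREATE"]
      apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
    failurePolicy: Fail                   # with Fail, a broken webhook blocks pod creation
    sideEffects: None
    admissionReviewVersions: ["v1", "v1beta1"]
  EOF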
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.379 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":280,"completed":12,"skipped":70,"failed":0}
SSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 15:56:40.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test env composition
Feb 18 15:56:40.325: INFO: Waiting up to 5m0s for pod "var-expansion-3beb89af-2dc7-4761-b6ff-529ed6f1b295" in namespace "var-expansion-2171" to be "success or failure"
Feb 18 15:56:40.330: INFO: Pod "var-expansion-3beb89af-2dc7-4761-b6ff-529ed6f1b295": Phase="Pending", Reason="", readiness=false. Elapsed: 4.72135ms
Feb 18 15:56:42.821: INFO: Pod "var-expansion-3beb89af-2dc7-4761-b6ff-529ed6f1b295": Phase="Pending", Reason="", readiness=false. Elapsed: 2.495279232s
Feb 18 15:56:44.832: INFO: Pod "var-expansion-3beb89af-2dc7-4761-b6ff-529ed6f1b295": Phase="Pending", Reason="", readiness=false. Elapsed: 4.506522133s
Feb 18 15:56:46.844: INFO: Pod "var-expansion-3beb89af-2dc7-4761-b6ff-529ed6f1b295": Phase="Pending", Reason="", readiness=false. Elapsed: 6.518626337s
Feb 18 15:56:48.851: INFO: Pod "var-expansion-3beb89af-2dc7-4761-b6ff-529ed6f1b295": Phase="Pending", Reason="", readiness=false. Elapsed: 8.52544439s
Feb 18 15:56:50.860: INFO: Pod "var-expansion-3beb89af-2dc7-4761-b6ff-529ed6f1b295": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.534430092s
STEP: Saw pod success
Feb 18 15:56:50.860: INFO: Pod "var-expansion-3beb89af-2dc7-4761-b6ff-529ed6f1b295" satisfied condition "success or failure"
Feb 18 15:56:50.867: INFO: Trying to get logs from node jerma-node pod var-expansion-3beb89af-2dc7-4761-b6ff-529ed6f1b295 container dapi-container:
STEP: delete the pod
Feb 18 15:56:51.022: INFO: Waiting for pod var-expansion-3beb89af-2dc7-4761-b6ff-529ed6f1b295 to disappear
Feb 18 15:56:51.026: INFO: Pod var-expansion-3beb89af-2dc7-4761-b6ff-529ed6f1b295 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 15:56:51.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2171" for this suite.
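Composing env vars, as tested above, uses the $(VAR) expansion syntax inside another variable's value. A sketch (names and values are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["env"]
      env:
      - name: FOO
        value: foo-value
      - name: BAR
        value: bar-value
      - name: FOOBAR
        value: "$(FOO);;$(BAR)"           # expanded from the two vars defined above
  EOF
  kubectl logs var-expansion-demo | grep FOOBAR   # expect FOOBAR=foo-value;;bar-value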
• [SLOW TEST:10.887 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":280,"completed":13,"skipped":74,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 15:56:51.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Feb 18 15:56:51.534: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 15:56:54.148: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 15:57:06.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3388" for this suite.
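The CRD test above checks that custom resources from two different API groups both land in the aggregated OpenAPI document. A sketch with two minimal CRDs (group and kind names are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: foos.groupa.example.com
  spec:
    group: groupa.example.com
    scope: Namespaced
    names: {plural: foos, singular: foo, kind: Foo}
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
  ---
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: bars.groupb.example.com
  spec:
    group: groupb.example.com
    scope: Namespaced
    names: {plural: bars, singular: bar, kind: Bar}
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
  EOF
  # both groups should now appear in the published schema:
  kubectl get --raw /openapi/v2 | grep -o 'group[ab].example.com' | sort -u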
• [SLOW TEST:15.339 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":280,"completed":14,"skipped":79,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 15:57:06.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service multi-endpoint-test in namespace services-4423 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4423 to expose endpoints map[] Feb 18 15:57:06.705: INFO: successfully validated that service multi-endpoint-test in namespace services-4423 exposes endpoints map[] (32.147215ms elapsed) STEP: Creating pod pod1 in namespace services-4423 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4423 to expose endpoints map[pod1:[100]] Feb 18 15:57:11.033: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.312036287s elapsed, will retry) Feb 18 15:57:15.092: INFO: successfully validated that service multi-endpoint-test in namespace services-4423 exposes endpoints map[pod1:[100]] (8.370359103s elapsed) STEP: Creating pod pod2 in namespace services-4423 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4423 to expose endpoints map[pod1:[100] pod2:[101]] Feb 18 15:57:19.682: INFO: Unexpected endpoints: found map[2a1a9ee9-4716-487c-86a8-3f99f3ea3a3d:[100]], expected map[pod1:[100] pod2:[101]] (4.585931688s elapsed, will retry) Feb 18 15:57:22.958: INFO: successfully validated that service multi-endpoint-test in namespace services-4423 exposes endpoints map[pod1:[100] pod2:[101]] (7.862039123s elapsed) STEP: Deleting pod pod1 in namespace services-4423 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4423 to expose endpoints map[pod2:[101]] Feb 18 15:57:24.023: INFO: successfully validated that service multi-endpoint-test in namespace services-4423 exposes endpoints map[pod2:[101]] (1.040122451s elapsed) STEP: Deleting pod pod2 in namespace services-4423 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4423 to expose endpoints map[] Feb 18 15:57:26.084: INFO: successfully validated that service multi-endpoint-test in namespace services-4423 exposes endpoints map[] (2.04558116s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 15:57:26.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4423" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.139 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":280,"completed":15,"skipped":88,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 15:57:26.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Feb 18 15:57:26.692: INFO: Created pod &Pod{ObjectMeta:{dns-3016 dns-3016 /api/v1/namespaces/dns-3016/pods/dns-3016 9ab6d533-899e-4fff-8427-ecc9e8e6e9e8 9202140 0 2020-02-18 15:57:26 +0000 UTC map[] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j8htx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j8htx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j8htx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 15:57:26.709: INFO: The status of Pod dns-3016 is Pending, waiting for it to be Running (with Ready = true) Feb 18 15:57:28.975: INFO: The status of Pod dns-3016 is Pending, waiting for it to be Running (with Ready = true) Feb 18 15:57:30.718: INFO: The status of Pod dns-3016 is Pending, waiting for it to be Running (with Ready = true) Feb 18 15:57:32.717: INFO: The status of Pod dns-3016 is Pending, waiting for it to be Running (with Ready = true) Feb 18 15:57:34.721: INFO: The status of Pod dns-3016 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
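Stripped of defaults, the pod dumped above is a short manifest. A minimal equivalent sketch (namespace omitted; pod name, image, and DNS settings taken from the log):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: dns-3016
  spec:
    dnsPolicy: None                   # ignore cluster DNS entirely...
    dnsConfig:                        # ...and use only what is listed here
      nameservers: ["1.1.1.1"]
      searches: ["resolv.conf.local"]
    containers:
    - name: agnhost
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["pause"]
  EOF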
Feb 18 15:57:34.721: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3016 PodName:dns-3016 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 15:57:34.721: INFO: >>> kubeConfig: /root/.kube/config I0218 15:57:34.787355 9 log.go:172] (0xc00391c210) (0xc0028eeaa0) Create stream I0218 15:57:34.787661 9 log.go:172] (0xc00391c210) (0xc0028eeaa0) Stream added, broadcasting: 1 I0218 15:57:34.792389 9 log.go:172] (0xc00391c210) Reply frame received for 1 I0218 15:57:34.792468 9 log.go:172] (0xc00391c210) (0xc002acd4a0) Create stream I0218 15:57:34.792493 9 log.go:172] (0xc00391c210) (0xc002acd4a0) Stream added, broadcasting: 3 I0218 15:57:34.793749 9 log.go:172] (0xc00391c210) Reply frame received for 3 I0218 15:57:34.793813 9 log.go:172] (0xc00391c210) (0xc002959040) Create stream I0218 15:57:34.793820 9 log.go:172] (0xc00391c210) (0xc002959040) Stream added, broadcasting: 5 I0218 15:57:34.794825 9 log.go:172] (0xc00391c210) Reply frame received for 5 I0218 15:57:34.921892 9 log.go:172] (0xc00391c210) Data frame received for 3 I0218 15:57:34.921976 9 log.go:172] (0xc002acd4a0) (3) Data frame handling I0218 15:57:34.921995 9 log.go:172] (0xc002acd4a0) (3) Data frame sent I0218 15:57:34.985075 9 log.go:172] (0xc00391c210) (0xc002acd4a0) Stream removed, broadcasting: 3 I0218 15:57:34.985220 9 log.go:172] (0xc00391c210) Data frame received for 1 I0218 15:57:34.985247 9 log.go:172] (0xc00391c210) (0xc002959040) Stream removed, broadcasting: 5 I0218 15:57:34.985278 9 log.go:172] (0xc0028eeaa0) (1) Data frame handling I0218 15:57:34.985304 9 log.go:172] (0xc0028eeaa0) (1) Data frame sent I0218 15:57:34.985318 9 log.go:172] (0xc00391c210) (0xc0028eeaa0) Stream removed, broadcasting: 1 I0218 15:57:34.985324 9 log.go:172] (0xc00391c210) Go away received I0218 15:57:34.985803 9 log.go:172] (0xc00391c210) (0xc0028eeaa0) Stream removed, broadcasting: 1 I0218 15:57:34.985831 9 log.go:172] (0xc00391c210) (0xc002acd4a0) Stream removed, broadcasting: 3 I0218 15:57:34.985854 9 log.go:172] (0xc00391c210) (0xc002959040) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
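Both verification steps (the suffix list above, the server list below) amount to reading the pod's generated /etc/resolv.conf; agnhost's dns-suffix and dns-server-list helpers parse that file. Roughly the same check by hand:

  kubectl exec -n dns-3016 dns-3016 -- cat /etc/resolv.conf
  # expected, given dnsPolicy=None and the dnsConfig above:
  #   nameserver 1.1.1.1
  #   search resolv.conf.local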
Feb 18 15:57:34.985: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3016 PodName:dns-3016 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 15:57:34.986: INFO: >>> kubeConfig: /root/.kube/config I0218 15:57:35.035555 9 log.go:172] (0xc00391c840) (0xc0028ef040) Create stream I0218 15:57:35.035863 9 log.go:172] (0xc00391c840) (0xc0028ef040) Stream added, broadcasting: 1 I0218 15:57:35.040542 9 log.go:172] (0xc00391c840) Reply frame received for 1 I0218 15:57:35.040589 9 log.go:172] (0xc00391c840) (0xc002acd540) Create stream I0218 15:57:35.040607 9 log.go:172] (0xc00391c840) (0xc002acd540) Stream added, broadcasting: 3 I0218 15:57:35.042318 9 log.go:172] (0xc00391c840) Reply frame received for 3 I0218 15:57:35.042354 9 log.go:172] (0xc00391c840) (0xc0029590e0) Create stream I0218 15:57:35.042365 9 log.go:172] (0xc00391c840) (0xc0029590e0) Stream added, broadcasting: 5 I0218 15:57:35.043846 9 log.go:172] (0xc00391c840) Reply frame received for 5 I0218 15:57:35.119213 9 log.go:172] (0xc00391c840) Data frame received for 3 I0218 15:57:35.119662 9 log.go:172] (0xc002acd540) (3) Data frame handling I0218 15:57:35.119715 9 log.go:172] (0xc002acd540) (3) Data frame sent I0218 15:57:35.185386 9 log.go:172] (0xc00391c840) Data frame received for 1 I0218 15:57:35.185487 9 log.go:172] (0xc00391c840) (0xc002acd540) Stream removed, broadcasting: 3 I0218 15:57:35.185623 9 log.go:172] (0xc00391c840) (0xc0029590e0) Stream removed, broadcasting: 5 I0218 15:57:35.185735 9 log.go:172] (0xc0028ef040) (1) Data frame handling I0218 15:57:35.185767 9 log.go:172] (0xc0028ef040) (1) Data frame sent I0218 15:57:35.185789 9 log.go:172] (0xc00391c840) (0xc0028ef040) Stream removed, broadcasting: 1 I0218 15:57:35.185800 9 log.go:172] (0xc00391c840) Go away received I0218 15:57:35.186048 9 log.go:172] (0xc00391c840) (0xc0028ef040) Stream removed, broadcasting: 1 I0218 15:57:35.186063 9 log.go:172] (0xc00391c840) (0xc002acd540) Stream removed, broadcasting: 3 I0218 15:57:35.186069 9 log.go:172] (0xc00391c840) (0xc0029590e0) Stream removed, broadcasting: 5 Feb 18 15:57:35.186: INFO: Deleting pod dns-3016... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 15:57:35.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3016" for this suite. 
• [SLOW TEST:8.658 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":280,"completed":16,"skipped":102,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 15:57:35.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Feb 18 15:57:35.350: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 18 15:57:35.375: INFO: Waiting for terminating namespaces to be deleted... Feb 18 15:57:35.378: INFO: Logging pods the kubelet thinks is on node jerma-node before test Feb 18 15:57:35.385: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Feb 18 15:57:35.385: INFO: Container kube-proxy ready: true, restart count 0 Feb 18 15:57:35.385: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 18 15:57:35.385: INFO: Container weave ready: true, restart count 1 Feb 18 15:57:35.385: INFO: Container weave-npc ready: true, restart count 0 Feb 18 15:57:35.385: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Feb 18 15:57:35.416: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 18 15:57:35.416: INFO: Container coredns ready: true, restart count 0 Feb 18 15:57:35.416: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 18 15:57:35.416: INFO: Container coredns ready: true, restart count 0 Feb 18 15:57:35.416: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 18 15:57:35.416: INFO: Container kube-controller-manager ready: true, restart count 11 Feb 18 15:57:35.416: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Feb 18 15:57:35.416: INFO: Container kube-proxy ready: true, restart count 0 Feb 18 15:57:35.416: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 18 15:57:35.416: INFO: Container weave ready: true, restart count 0 Feb 18 15:57:35.416: INFO: Container weave-npc ready: true, restart count 0 Feb 18 15:57:35.416: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 18 15:57:35.416: INFO: Container kube-scheduler ready: true, restart count 
15 Feb 18 15:57:35.416: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 18 15:57:35.416: INFO: Container kube-apiserver ready: true, restart count 1 Feb 18 15:57:35.416: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 18 15:57:35.416: INFO: Container etcd ready: true, restart count 1 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f48a525c64a549], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f48a525da3b8ee], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 15:57:36.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9509" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":280,"completed":17,"skipped":110,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 15:57:36.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 18 15:57:36.716: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8ffa441-7097-46f7-8445-05220df10ee2" in namespace "downward-api-6479" to be "success or failure" Feb 18 15:57:36.730: INFO: Pod "downwardapi-volume-a8ffa441-7097-46f7-8445-05220df10ee2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.17152ms Feb 18 15:57:38.740: INFO: Pod "downwardapi-volume-a8ffa441-7097-46f7-8445-05220df10ee2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02380001s Feb 18 15:57:40.782: INFO: Pod "downwardapi-volume-a8ffa441-7097-46f7-8445-05220df10ee2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065659855s Feb 18 15:57:42.833: INFO: Pod "downwardapi-volume-a8ffa441-7097-46f7-8445-05220df10ee2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.116556266s Feb 18 15:57:44.842: INFO: Pod "downwardapi-volume-a8ffa441-7097-46f7-8445-05220df10ee2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125413244s STEP: Saw pod success Feb 18 15:57:44.842: INFO: Pod "downwardapi-volume-a8ffa441-7097-46f7-8445-05220df10ee2" satisfied condition "success or failure" Feb 18 15:57:44.853: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a8ffa441-7097-46f7-8445-05220df10ee2 container client-container: STEP: delete the pod Feb 18 15:57:44.908: INFO: Waiting for pod downwardapi-volume-a8ffa441-7097-46f7-8445-05220df10ee2 to disappear Feb 18 15:57:44.982: INFO: Pod downwardapi-volume-a8ffa441-7097-46f7-8445-05220df10ee2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 15:57:44.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6479" for this suite. • [SLOW TEST:8.497 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":18,"skipped":135,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 15:57:44.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 15:57:45.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9480" for this suite. 
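The lease test above exercises plain CRUD against the coordination.k8s.io API. A minimal sketch of the same object type (name, namespace, and holder are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: coordination.k8s.io/v1
  kind: Lease
  metadata:
    name: demo-lease
    namespace: default
  spec:
    holderIdentity: demo-holder       # who currently holds the lease
    leaseDurationSeconds: 30          # holder must renew within this window
  EOF
  kubectl get lease -n default demo-lease -o yaml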
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":280,"completed":19,"skipped":144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 15:57:45.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 15:57:51.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2289" for this suite. • [SLOW TEST:6.250 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":280,"completed":20,"skipped":188,"failed":0} SSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 15:57:51.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 15:57:51.844: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 15:58:00.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3676" for this suite. 
• [SLOW TEST:9.044 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":280,"completed":21,"skipped":192,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 15:58:00.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 18 15:58:00.915: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d19df75-25f7-4f67-a4e3-a0e6480b6206" in namespace "projected-1429" to be "success or failure" Feb 18 15:58:00.922: INFO: Pod "downwardapi-volume-5d19df75-25f7-4f67-a4e3-a0e6480b6206": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193177ms Feb 18 15:58:02.940: INFO: Pod "downwardapi-volume-5d19df75-25f7-4f67-a4e3-a0e6480b6206": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025023968s Feb 18 15:58:04.945: INFO: Pod "downwardapi-volume-5d19df75-25f7-4f67-a4e3-a0e6480b6206": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029196134s Feb 18 15:58:06.955: INFO: Pod "downwardapi-volume-5d19df75-25f7-4f67-a4e3-a0e6480b6206": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039941997s Feb 18 15:58:08.965: INFO: Pod "downwardapi-volume-5d19df75-25f7-4f67-a4e3-a0e6480b6206": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04908278s STEP: Saw pod success Feb 18 15:58:08.965: INFO: Pod "downwardapi-volume-5d19df75-25f7-4f67-a4e3-a0e6480b6206" satisfied condition "success or failure" Feb 18 15:58:08.970: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5d19df75-25f7-4f67-a4e3-a0e6480b6206 container client-container: STEP: delete the pod Feb 18 15:58:09.028: INFO: Waiting for pod downwardapi-volume-5d19df75-25f7-4f67-a4e3-a0e6480b6206 to disappear Feb 18 15:58:09.036: INFO: Pod downwardapi-volume-5d19df75-25f7-4f67-a4e3-a0e6480b6206 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 15:58:09.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1429" for this suite. 
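The projected downward API pattern above exposes a container's own resource request as a file inside the pod. A minimal sketch, not the suite's exact pod; the divisor is optional and defaults to 1 (which rounds fractional CPUs up to whole cores):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-downward-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: "250m"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.cpu
                divisor: 1m           # report in millicores
  EOF
  kubectl logs projected-downward-demo   # prints 250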
• [SLOW TEST:8.312 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":22,"skipped":210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 15:58:09.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 15:58:17.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-399" for this suite. 
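The read-only kubelet test above presumably hinges on a single pod-spec field, securityContext.readOnlyRootFilesystem; with it set, writes to the container's root filesystem fail at the filesystem layer. A sketch (pod name and image are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: readonly-demo
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "touch /should-fail && echo wrote || echo read-only as expected"]
      securityContext:
        readOnlyRootFilesystem: true
  EOF
  kubectl logs readonly-demo          # expect: read-only as expected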
• [SLOW TEST:8.188 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":23,"skipped":236,"failed":0} SS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 15:58:17.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-47f00c8f-0c2c-4b59-b8d8-f579b5525ce7 in namespace container-probe-655 Feb 18 15:58:25.525: INFO: Started pod liveness-47f00c8f-0c2c-4b59-b8d8-f579b5525ce7 in namespace container-probe-655 STEP: checking the pod's current state and verifying that restartCount is present Feb 18 15:58:25.533: INFO: Initial restart count of pod liveness-47f00c8f-0c2c-4b59-b8d8-f579b5525ce7 is 0 Feb 18 15:58:46.267: INFO: Restart count of pod container-probe-655/liveness-47f00c8f-0c2c-4b59-b8d8-f579b5525ce7 is now 1 (20.73336215s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 15:58:46.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-655" for this suite. 
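The restart above (count 0 -> 1 after ~20s) is the liveness machinery doing its job: the kubelet polls the probe and restarts the container once it fails. A minimal sketch using the upstream docs' helper image (an assumption here, not the suite's pod), which serves /healthz successfully for about ten seconds and then starts returning errors:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-http-demo
  spec:
    containers:
    - name: liveness
      image: k8s.gcr.io/liveness
      args: ["/server"]
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 3
        periodSeconds: 3
  EOF
  kubectl get pod liveness-http-demo -w   # RESTARTS ticks up once /healthz starts failing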
• [SLOW TEST:29.110 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":24,"skipped":238,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 15:58:46.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-5f627634-952d-43f0-a7f0-d03a7726eed7 in namespace container-probe-3112 Feb 18 15:58:56.528: INFO: Started pod liveness-5f627634-952d-43f0-a7f0-d03a7726eed7 in namespace container-probe-3112 STEP: checking the pod's current state and verifying that restartCount is present Feb 18 15:58:56.544: INFO: Initial restart count of pod liveness-5f627634-952d-43f0-a7f0-d03a7726eed7 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:02:56.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3112" for this suite. 
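The converse case: a tcpSocket probe against a port the container actually listens on keeps succeeding, so restartCount stays 0 for the whole soak (the ~4 minutes above). A sketch; agnhost's netexec helper listens on 8080 by default:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-tcp-demo
  spec:
    containers:
    - name: server
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["netexec"]               # HTTP server on :8080
      livenessProbe:
        tcpSocket:
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 5
  EOF
  kubectl get pod liveness-tcp-demo -w    # RESTARTS should stay 0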
• [SLOW TEST:250.484 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":280,"completed":25,"skipped":310,"failed":0} [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:02:56.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3410.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3410.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3410.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3410.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3410.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3410.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3410.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 80.61.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.61.80_udp@PTR;check="$$(dig +tcp +noall +answer +search 80.61.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.61.80_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3410.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3410.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3410.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3410.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3410.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3410.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3410.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3410.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 80.61.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.61.80_udp@PTR;check="$$(dig +tcp +noall +answer +search 80.61.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.61.80_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 18 16:03:09.259: INFO: Unable to read wheezy_udp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:09.264: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:09.268: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:09.273: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:09.303: INFO: Unable to read jessie_udp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:09.307: INFO: Unable to read jessie_tcp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:09.310: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:09.313: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:09.331: INFO: Lookups using dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e failed for: [wheezy_udp@dns-test-service.dns-3410.svc.cluster.local wheezy_tcp@dns-test-service.dns-3410.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local jessie_udp@dns-test-service.dns-3410.svc.cluster.local jessie_tcp@dns-test-service.dns-3410.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local] Feb 18 16:03:14.356: INFO: Unable to read wheezy_udp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:14.367: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods 
dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:14.374: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:14.378: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:14.411: INFO: Unable to read jessie_udp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:14.421: INFO: Unable to read jessie_tcp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:14.436: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:14.445: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:14.495: INFO: Lookups using dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e failed for: [wheezy_udp@dns-test-service.dns-3410.svc.cluster.local wheezy_tcp@dns-test-service.dns-3410.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local jessie_udp@dns-test-service.dns-3410.svc.cluster.local jessie_tcp@dns-test-service.dns-3410.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local] Feb 18 16:03:19.344: INFO: Unable to read wheezy_udp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:19.367: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:19.373: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:19.377: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:19.406: INFO: Unable to read jessie_udp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the 
server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:19.411: INFO: Unable to read jessie_tcp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:19.415: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:19.419: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:19.453: INFO: Lookups using dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e failed for: [wheezy_udp@dns-test-service.dns-3410.svc.cluster.local wheezy_tcp@dns-test-service.dns-3410.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local jessie_udp@dns-test-service.dns-3410.svc.cluster.local jessie_tcp@dns-test-service.dns-3410.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local] Feb 18 16:03:24.339: INFO: Unable to read wheezy_udp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:24.343: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:24.347: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:24.352: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:24.386: INFO: Unable to read jessie_udp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:24.391: INFO: Unable to read jessie_tcp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:24.394: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:24.409: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod 
dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:24.429: INFO: Lookups using dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e failed for: [wheezy_udp@dns-test-service.dns-3410.svc.cluster.local wheezy_tcp@dns-test-service.dns-3410.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local jessie_udp@dns-test-service.dns-3410.svc.cluster.local jessie_tcp@dns-test-service.dns-3410.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local] Feb 18 16:03:29.340: INFO: Unable to read wheezy_udp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:29.345: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:29.350: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:29.354: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:29.414: INFO: Unable to read jessie_udp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:29.417: INFO: Unable to read jessie_tcp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:29.421: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:29.426: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:29.449: INFO: Lookups using dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e failed for: [wheezy_udp@dns-test-service.dns-3410.svc.cluster.local wheezy_tcp@dns-test-service.dns-3410.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local jessie_udp@dns-test-service.dns-3410.svc.cluster.local jessie_tcp@dns-test-service.dns-3410.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local] Feb 18 
16:03:34.340: INFO: Unable to read wheezy_udp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:34.346: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:34.349: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:34.351: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:34.382: INFO: Unable to read jessie_udp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:34.385: INFO: Unable to read jessie_tcp@dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:34.387: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:34.391: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local from pod dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e: the server could not find the requested resource (get pods dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e) Feb 18 16:03:34.412: INFO: Lookups using dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e failed for: [wheezy_udp@dns-test-service.dns-3410.svc.cluster.local wheezy_tcp@dns-test-service.dns-3410.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local jessie_udp@dns-test-service.dns-3410.svc.cluster.local jessie_tcp@dns-test-service.dns-3410.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3410.svc.cluster.local] Feb 18 16:03:39.475: INFO: DNS probes using dns-3410/dns-test-651fab7b-f095-4423-a4b7-3f90e4606a2e succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:03:39.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3410" for this suite. 
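The wheezy/jessie probe loops above reduce to a handful of dig queries; once the endpoints exist, the same lookups can be run by hand from any pod with dig installed (service name, namespace, and ClusterIP taken from the log):

  # A record of the service, and SRV for its named http port:
  dig +short dns-test-service.dns-3410.svc.cluster.local A
  dig +short _http._tcp.dns-test-service.dns-3410.svc.cluster.local SRV
  # reverse (PTR) lookup of the ClusterIP:
  dig +short -x 10.96.61.80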
• [SLOW TEST:43.163 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":280,"completed":26,"skipped":310,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:03:40.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 18 16:03:40.195: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7e15005-4637-46e2-ac9a-746f8be2062b" in namespace "downward-api-1043" to be "success or failure" Feb 18 16:03:40.205: INFO: Pod "downwardapi-volume-d7e15005-4637-46e2-ac9a-746f8be2062b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.521813ms Feb 18 16:03:42.210: INFO: Pod "downwardapi-volume-d7e15005-4637-46e2-ac9a-746f8be2062b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015316116s Feb 18 16:03:44.219: INFO: Pod "downwardapi-volume-d7e15005-4637-46e2-ac9a-746f8be2062b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023617839s Feb 18 16:03:46.226: INFO: Pod "downwardapi-volume-d7e15005-4637-46e2-ac9a-746f8be2062b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030482221s Feb 18 16:03:48.232: INFO: Pod "downwardapi-volume-d7e15005-4637-46e2-ac9a-746f8be2062b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037139529s STEP: Saw pod success Feb 18 16:03:48.232: INFO: Pod "downwardapi-volume-d7e15005-4637-46e2-ac9a-746f8be2062b" satisfied condition "success or failure" Feb 18 16:03:48.235: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d7e15005-4637-46e2-ac9a-746f8be2062b container client-container: STEP: delete the pod Feb 18 16:03:48.291: INFO: Waiting for pod downwardapi-volume-d7e15005-4637-46e2-ac9a-746f8be2062b to disappear Feb 18 16:03:48.314: INFO: Pod downwardapi-volume-d7e15005-4637-46e2-ac9a-746f8be2062b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:03:48.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1043" for this suite. 
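When a container declares no memory limit, the downward API's limits.memory falls back to the node's allocatable memory, which is exactly what the test above asserts. A minimal sketch (no resources block at all; pod name and image are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: default-limit-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
  EOF
  # prints node allocatable memory in bytes; with resources.limits.memory set,
  # the same file would report the container's own limit instead:
  kubectl logs default-limit-demo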
• [SLOW TEST:8.369 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":27,"skipped":321,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:03:48.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Feb 18 16:03:48.981: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Feb 18 16:03:51.002: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638629, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638629, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638629, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638628, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:03:53.012: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638629, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638629, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63717638629, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638628, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:03:55.011: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638629, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638629, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638629, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638628, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 18 16:03:58.104: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:03:58.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:03:59.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1661" for this suite. 
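Aside: the deployment, service, and cert plumbing above exists to serve one CRD field: spec.conversion. When an object stored at v1 is read at v2, the API server calls the webhook to translate it. A sketch of a two-version CRD wired that way (group, names, path, port, and CA bundle are illustrative; a real manifest needs the CA that signed the webhook's serving certificate):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names: {plural: widgets, singular: widget, kind: Widget}
  versions:
  - name: v1
    served: true
    storage: true
    schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
  - name: v2
    served: true
    storage: false
    schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        caBundle: Cg==                  # placeholder; must be the webhook's signing CA
        service:
          namespace: crd-webhook-demo
          name: crd-conversion-webhook
          path: /crdconvert
          port: 9443
EOF
# creating the CR at v1 and reading it at v2 forces a round trip through the webhook:
# kubectl get widgets.v2.example.com test-cr -o yaml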
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:11.223 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":280,"completed":28,"skipped":322,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:03:59.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-61abceda-4cd5-46f8-8206-319057db7d35 STEP: Creating a pod to test consume configMaps Feb 18 16:03:59.918: INFO: Waiting up to 5m0s for pod "pod-configmaps-fe39afd8-6ee0-428c-accd-a3e1b311678e" in namespace "configmap-3363" to be "success or failure" Feb 18 16:03:59.964: INFO: Pod "pod-configmaps-fe39afd8-6ee0-428c-accd-a3e1b311678e": Phase="Pending", Reason="", readiness=false. Elapsed: 45.452594ms Feb 18 16:04:01.974: INFO: Pod "pod-configmaps-fe39afd8-6ee0-428c-accd-a3e1b311678e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054997333s Feb 18 16:04:04.006: INFO: Pod "pod-configmaps-fe39afd8-6ee0-428c-accd-a3e1b311678e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087442711s Feb 18 16:04:06.034: INFO: Pod "pod-configmaps-fe39afd8-6ee0-428c-accd-a3e1b311678e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114697929s Feb 18 16:04:08.107: INFO: Pod "pod-configmaps-fe39afd8-6ee0-428c-accd-a3e1b311678e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.188083792s STEP: Saw pod success Feb 18 16:04:08.107: INFO: Pod "pod-configmaps-fe39afd8-6ee0-428c-accd-a3e1b311678e" satisfied condition "success or failure" Feb 18 16:04:08.113: INFO: Trying to get logs from node jerma-node pod pod-configmaps-fe39afd8-6ee0-428c-accd-a3e1b311678e container configmap-volume-test: STEP: delete the pod Feb 18 16:04:08.461: INFO: Waiting for pod pod-configmaps-fe39afd8-6ee0-428c-accd-a3e1b311678e to disappear Feb 18 16:04:08.508: INFO: Pod pod-configmaps-fe39afd8-6ee0-428c-accd-a3e1b311678e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:04:08.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3363" for this suite. 
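Aside: "with mappings as non-root" combines two knobs: items: on the configMap volume remaps a key onto a chosen relative path, and the pod-level securityContext runs the container under a non-zero UID that must still be able to read the projected file. A sketch (UID, key, and paths are illustrative):

kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root
    fsGroup: 1000
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "id -u && cat /etc/cm/mapped/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-demo
      items:
      - key: data-1
        path: mapped/data-1          # the mapping: key data-1 exposed under a chosen path
EOF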
• [SLOW TEST:8.922 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":29,"skipped":332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:04:08.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:04:08.723: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 18 16:04:11.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4882 create -f -' Feb 18 16:04:14.554: INFO: stderr: "" Feb 18 16:04:14.554: INFO: stdout: "e2e-test-crd-publish-openapi-7680-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Feb 18 16:04:14.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4882 delete e2e-test-crd-publish-openapi-7680-crds test-cr' Feb 18 16:04:14.675: INFO: stderr: "" Feb 18 16:04:14.675: INFO: stdout: "e2e-test-crd-publish-openapi-7680-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Feb 18 16:04:14.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4882 apply -f -' Feb 18 16:04:14.995: INFO: stderr: "" Feb 18 16:04:14.995: INFO: stdout: "e2e-test-crd-publish-openapi-7680-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Feb 18 16:04:14.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4882 delete e2e-test-crd-publish-openapi-7680-crds test-cr' Feb 18 16:04:15.176: INFO: stderr: "" Feb 18 16:04:15.176: INFO: stdout: "e2e-test-crd-publish-openapi-7680-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Feb 18 16:04:15.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7680-crds' Feb 18 16:04:15.582: INFO: stderr: "" Feb 18 16:04:15.582: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7680-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 
16:04:18.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4882" for this suite. • [SLOW TEST:10.180 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":280,"completed":30,"skipped":355,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:04:18.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1863 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 18 16:04:18.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3229' Feb 18 16:04:18.957: INFO: stderr: "" Feb 18 16:04:18.957: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1868 Feb 18 16:04:19.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3229' Feb 18 16:04:23.820: INFO: stderr: "" Feb 18 16:04:23.821: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:04:23.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3229" for this suite. 
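Aside: --generator=run-pod/v1 above pins the one generator that later became kubectl run's only behavior, creating a bare Pod; kubectl removed the generator flags in 1.20, so on a current client the equivalent check is (namespace name is illustrative):

kubectl create namespace run-demo
kubectl run e2e-test-httpd-pod -n run-demo --restart=Never \
  --image=docker.io/library/httpd:2.4.38-alpine
kubectl get pod e2e-test-httpd-pod -n run-demo -o jsonpath='{.spec.restartPolicy}'  # Never
kubectl delete pod e2e-test-httpd-pod -n run-demo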
• [SLOW TEST:5.145 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1859 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":280,"completed":31,"skipped":374,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:04:23.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-49edba91-bd3f-4307-974c-742dd57115f1 STEP: Creating a pod to test consume secrets Feb 18 16:04:23.991: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b93b02d4-17c8-4f1a-8cc6-3a97a71586e6" in namespace "projected-3189" to be "success or failure" Feb 18 16:04:24.008: INFO: Pod "pod-projected-secrets-b93b02d4-17c8-4f1a-8cc6-3a97a71586e6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.594403ms Feb 18 16:04:26.014: INFO: Pod "pod-projected-secrets-b93b02d4-17c8-4f1a-8cc6-3a97a71586e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023190337s Feb 18 16:04:28.027: INFO: Pod "pod-projected-secrets-b93b02d4-17c8-4f1a-8cc6-3a97a71586e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036058996s Feb 18 16:04:30.033: INFO: Pod "pod-projected-secrets-b93b02d4-17c8-4f1a-8cc6-3a97a71586e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042213996s STEP: Saw pod success Feb 18 16:04:30.033: INFO: Pod "pod-projected-secrets-b93b02d4-17c8-4f1a-8cc6-3a97a71586e6" satisfied condition "success or failure" Feb 18 16:04:30.036: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-b93b02d4-17c8-4f1a-8cc6-3a97a71586e6 container projected-secret-volume-test: STEP: delete the pod Feb 18 16:04:30.212: INFO: Waiting for pod pod-projected-secrets-b93b02d4-17c8-4f1a-8cc6-3a97a71586e6 to disappear Feb 18 16:04:30.241: INFO: Pod pod-projected-secrets-b93b02d4-17c8-4f1a-8cc6-3a97a71586e6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:04:30.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3189" for this suite. 
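Aside: the point of "defaultMode set" is that every file projected from the secret must land with the requested permission bits. A sketch of the volume shape (the secret name and the 0400 mode are illustrative):

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400        # the mode expected on every projected file
      sources:
      - secret:
          name: projected-secret-demo
EOF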
• [SLOW TEST:6.402 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":32,"skipped":399,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:04:30.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting the proxy server Feb 18 16:04:30.381: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:04:30.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3007" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":280,"completed":33,"skipped":407,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:04:30.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:04:37.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4942" for this suite. 
STEP: Destroying namespace "nsdeletetest-5146" for this suite. Feb 18 16:04:37.796: INFO: Namespace nsdeletetest-5146 was already deleted STEP: Destroying namespace "nsdeletetest-6766" for this suite. • [SLOW TEST:7.280 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":280,"completed":34,"skipped":412,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:04:37.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Feb 18 16:04:37.950: INFO: Waiting up to 5m0s for pod "downward-api-4376d1ec-539f-45d9-ad3d-8c170f4d3bc7" in namespace "downward-api-2986" to be "success or failure" Feb 18 16:04:37.975: INFO: Pod "downward-api-4376d1ec-539f-45d9-ad3d-8c170f4d3bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.241405ms Feb 18 16:04:39.986: INFO: Pod "downward-api-4376d1ec-539f-45d9-ad3d-8c170f4d3bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035384701s Feb 18 16:04:41.995: INFO: Pod "downward-api-4376d1ec-539f-45d9-ad3d-8c170f4d3bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04522189s Feb 18 16:04:44.003: INFO: Pod "downward-api-4376d1ec-539f-45d9-ad3d-8c170f4d3bc7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052971071s Feb 18 16:04:46.009: INFO: Pod "downward-api-4376d1ec-539f-45d9-ad3d-8c170f4d3bc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059172148s STEP: Saw pod success Feb 18 16:04:46.009: INFO: Pod "downward-api-4376d1ec-539f-45d9-ad3d-8c170f4d3bc7" satisfied condition "success or failure" Feb 18 16:04:46.012: INFO: Trying to get logs from node jerma-node pod downward-api-4376d1ec-539f-45d9-ad3d-8c170f4d3bc7 container dapi-container: STEP: delete the pod Feb 18 16:04:46.051: INFO: Waiting for pod downward-api-4376d1ec-539f-45d9-ad3d-8c170f4d3bc7 to disappear Feb 18 16:04:46.070: INFO: Pod downward-api-4376d1ec-539f-45d9-ad3d-8c170f4d3bc7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:04:46.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2986" for this suite. 
• [SLOW TEST:8.256 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":280,"completed":35,"skipped":441,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:04:46.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 18 16:04:46.270: INFO: Waiting up to 5m0s for pod "pod-d19dc5da-7da6-484e-9b45-2f3f1dad6d44" in namespace "emptydir-7561" to be "success or failure" Feb 18 16:04:46.401: INFO: Pod "pod-d19dc5da-7da6-484e-9b45-2f3f1dad6d44": Phase="Pending", Reason="", readiness=false. Elapsed: 131.376628ms Feb 18 16:04:49.033: INFO: Pod "pod-d19dc5da-7da6-484e-9b45-2f3f1dad6d44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.762706366s Feb 18 16:04:51.041: INFO: Pod "pod-d19dc5da-7da6-484e-9b45-2f3f1dad6d44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.771372858s Feb 18 16:04:53.048: INFO: Pod "pod-d19dc5da-7da6-484e-9b45-2f3f1dad6d44": Phase="Pending", Reason="", readiness=false. Elapsed: 6.778011082s Feb 18 16:04:55.054: INFO: Pod "pod-d19dc5da-7da6-484e-9b45-2f3f1dad6d44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.7840706s STEP: Saw pod success Feb 18 16:04:55.054: INFO: Pod "pod-d19dc5da-7da6-484e-9b45-2f3f1dad6d44" satisfied condition "success or failure" Feb 18 16:04:55.057: INFO: Trying to get logs from node jerma-node pod pod-d19dc5da-7da6-484e-9b45-2f3f1dad6d44 container test-container: STEP: delete the pod Feb 18 16:04:55.231: INFO: Waiting for pod pod-d19dc5da-7da6-484e-9b45-2f3f1dad6d44 to disappear Feb 18 16:04:55.235: INFO: Pod pod-d19dc5da-7da6-484e-9b45-2f3f1dad6d44 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:04:55.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7561" for this suite. 
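Aside: the triple in (root,0777,default) names the user the container runs as, the file mode exercised, and the emptyDir medium (default is node-local disk; medium: Memory would be tmpfs). The conformance test drives this through a dedicated mount-test image; a busybox stand-in exercising the same shape:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo data > /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # default medium: backed by the node's disk
EOF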
• [SLOW TEST:9.190 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":36,"skipped":442,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:04:55.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: executing a command with run --rm and attach with stdin Feb 18 16:04:55.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6706 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Feb 18 16:05:02.058: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0218 16:05:00.582051 191 log.go:172] (0xc000bf2c60) (0xc000b78320) Create stream\nI0218 16:05:00.582292 191 log.go:172] (0xc000bf2c60) (0xc000b78320) Stream added, broadcasting: 1\nI0218 16:05:00.588656 191 log.go:172] (0xc000bf2c60) Reply frame received for 1\nI0218 16:05:00.588724 191 log.go:172] (0xc000bf2c60) (0xc000640000) Create stream\nI0218 16:05:00.588732 191 log.go:172] (0xc000bf2c60) (0xc000640000) Stream added, broadcasting: 3\nI0218 16:05:00.590356 191 log.go:172] (0xc000bf2c60) Reply frame received for 3\nI0218 16:05:00.590385 191 log.go:172] (0xc000bf2c60) (0xc000b783c0) Create stream\nI0218 16:05:00.590394 191 log.go:172] (0xc000bf2c60) (0xc000b783c0) Stream added, broadcasting: 5\nI0218 16:05:00.592141 191 log.go:172] (0xc000bf2c60) Reply frame received for 5\nI0218 16:05:00.592163 191 log.go:172] (0xc000bf2c60) (0xc0006400a0) Create stream\nI0218 16:05:00.592169 191 log.go:172] (0xc000bf2c60) (0xc0006400a0) Stream added, broadcasting: 7\nI0218 16:05:00.593462 191 log.go:172] (0xc000bf2c60) Reply frame received for 7\nI0218 16:05:00.593640 191 log.go:172] (0xc000640000) (3) Writing data frame\nI0218 16:05:00.593790 191 log.go:172] (0xc000640000) (3) Writing data frame\nI0218 16:05:00.598574 191 log.go:172] (0xc000bf2c60) Data frame received for 5\nI0218 16:05:00.598616 191 log.go:172] (0xc000b783c0) (5) Data frame handling\nI0218 16:05:00.598627 191 log.go:172] (0xc000b783c0) (5) Data frame sent\nI0218 16:05:00.603851 191 log.go:172] (0xc000bf2c60) Data frame received for 5\nI0218 16:05:00.603885 191 log.go:172] (0xc000b783c0) (5) Data frame handling\nI0218 16:05:00.603894 191 log.go:172] (0xc000b783c0) (5) Data frame sent\nI0218 16:05:01.721159 191 log.go:172] (0xc000bf2c60) (0xc000640000) Stream removed, broadcasting: 3\nI0218 16:05:01.721286 191 log.go:172] (0xc000bf2c60) Data frame received for 1\nI0218 16:05:01.721321 191 log.go:172] (0xc000b78320) (1) Data frame handling\nI0218 16:05:01.721367 191 log.go:172] (0xc000b78320) (1) Data frame sent\nI0218 16:05:01.721393 191 log.go:172] (0xc000bf2c60) (0xc000b78320) Stream removed, broadcasting: 1\nI0218 16:05:01.722129 191 log.go:172] (0xc000bf2c60) (0xc000b783c0) Stream removed, broadcasting: 5\nI0218 16:05:01.722390 191 log.go:172] (0xc000bf2c60) (0xc0006400a0) Stream removed, broadcasting: 7\nI0218 16:05:01.722422 191 log.go:172] (0xc000bf2c60) Go away received\nI0218 16:05:01.722475 191 log.go:172] (0xc000bf2c60) (0xc000b78320) Stream removed, broadcasting: 1\nI0218 16:05:01.722509 191 log.go:172] (0xc000bf2c60) (0xc000640000) Stream removed, broadcasting: 3\nI0218 16:05:01.722525 191 log.go:172] (0xc000bf2c60) (0xc000b783c0) Stream removed, broadcasting: 5\nI0218 16:05:01.722535 191 log.go:172] (0xc000bf2c60) (0xc0006400a0) Stream removed, broadcasting: 7\n" Feb 18 16:05:02.058: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:05:04.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6706" for this suite. 
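Aside: the stderr deprecation warning above dates this run; kubectl dropped --generator in 1.20, and the job/v1 form of run --rm went with it. The attach-stdin-then-clean-up behavior survives for bare pods, so a rough modern equivalent is (pod name is illustrative; printf avoids a trailing newline so stdout concatenates exactly as in the log):

printf 'abcd1234' | kubectl run e2e-test-rm-busybox --rm --stdin --attach --restart=Never \
  --image=docker.io/library/busybox:1.29 -- sh -c 'cat && echo "stdin closed"'
# expected output: abcd1234stdin closed, followed by: pod "e2e-test-rm-busybox" deleted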
• [SLOW TEST:8.860 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":280,"completed":37,"skipped":449,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:05:04.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Feb 18 16:05:18.281: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9070 PodName:pod-sharedvolume-1fd2f90c-d509-48e4-a9d9-10c415c2340b ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 16:05:18.281: INFO: >>> kubeConfig: /root/.kube/config I0218 16:05:18.348105 9 log.go:172] (0xc00526e9a0) (0xc0026d8dc0) Create stream I0218 16:05:18.348254 9 log.go:172] (0xc00526e9a0) (0xc0026d8dc0) Stream added, broadcasting: 1 I0218 16:05:18.351936 9 log.go:172] (0xc00526e9a0) Reply frame received for 1 I0218 16:05:18.351984 9 log.go:172] (0xc00526e9a0) (0xc0026d8e60) Create stream I0218 16:05:18.351994 9 log.go:172] (0xc00526e9a0) (0xc0026d8e60) Stream added, broadcasting: 3 I0218 16:05:18.353532 9 log.go:172] (0xc00526e9a0) Reply frame received for 3 I0218 16:05:18.353563 9 log.go:172] (0xc00526e9a0) (0xc002a6abe0) Create stream I0218 16:05:18.353575 9 log.go:172] (0xc00526e9a0) (0xc002a6abe0) Stream added, broadcasting: 5 I0218 16:05:18.355049 9 log.go:172] (0xc00526e9a0) Reply frame received for 5 I0218 16:05:18.440057 9 log.go:172] (0xc00526e9a0) Data frame received for 3 I0218 16:05:18.440379 9 log.go:172] (0xc0026d8e60) (3) Data frame handling I0218 16:05:18.440446 9 log.go:172] (0xc0026d8e60) (3) Data frame sent I0218 16:05:18.535736 9 log.go:172] (0xc00526e9a0) (0xc0026d8e60) Stream removed, broadcasting: 3 I0218 16:05:18.535958 9 log.go:172] (0xc00526e9a0) Data frame received for 1 I0218 16:05:18.535979 9 log.go:172] (0xc0026d8dc0) (1) Data frame handling I0218 16:05:18.535995 9 log.go:172] (0xc0026d8dc0) (1) Data frame sent I0218 16:05:18.536005 9 log.go:172] (0xc00526e9a0) (0xc0026d8dc0) Stream removed, broadcasting: 1 I0218 16:05:18.536017 9 log.go:172] (0xc00526e9a0) (0xc002a6abe0) Stream removed, broadcasting: 5 I0218 16:05:18.536112 9 log.go:172] (0xc00526e9a0) Go away received I0218 16:05:18.536598 9 log.go:172] (0xc00526e9a0) (0xc0026d8dc0)
Stream removed, broadcasting: 1 I0218 16:05:18.536612 9 log.go:172] (0xc00526e9a0) (0xc0026d8e60) Stream removed, broadcasting: 3 I0218 16:05:18.536618 9 log.go:172] (0xc00526e9a0) (0xc002a6abe0) Stream removed, broadcasting: 5 Feb 18 16:05:18.536: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:05:18.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9070" for this suite. • [SLOW TEST:14.425 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":280,"completed":38,"skipped":458,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:05:18.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 18 16:05:18.700: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7fc6a7f2-9978-44b0-a7a8-5619805e1503" in namespace "projected-3916" to be "success or failure" Feb 18 16:05:18.711: INFO: Pod "downwardapi-volume-7fc6a7f2-9978-44b0-a7a8-5619805e1503": Phase="Pending", Reason="", readiness=false. Elapsed: 10.57884ms Feb 18 16:05:20.717: INFO: Pod "downwardapi-volume-7fc6a7f2-9978-44b0-a7a8-5619805e1503": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017002312s Feb 18 16:05:22.724: INFO: Pod "downwardapi-volume-7fc6a7f2-9978-44b0-a7a8-5619805e1503": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024320304s Feb 18 16:05:24.759: INFO: Pod "downwardapi-volume-7fc6a7f2-9978-44b0-a7a8-5619805e1503": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059169821s Feb 18 16:05:26.767: INFO: Pod "downwardapi-volume-7fc6a7f2-9978-44b0-a7a8-5619805e1503": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.067012917s STEP: Saw pod success Feb 18 16:05:26.767: INFO: Pod "downwardapi-volume-7fc6a7f2-9978-44b0-a7a8-5619805e1503" satisfied condition "success or failure" Feb 18 16:05:26.771: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7fc6a7f2-9978-44b0-a7a8-5619805e1503 container client-container: STEP: delete the pod Feb 18 16:05:27.021: INFO: Waiting for pod downwardapi-volume-7fc6a7f2-9978-44b0-a7a8-5619805e1503 to disappear Feb 18 16:05:27.025: INFO: Pod downwardapi-volume-7fc6a7f2-9978-44b0-a7a8-5619805e1503 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:05:27.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3916" for this suite. • [SLOW TEST:8.476 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":39,"skipped":459,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:05:27.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 18 16:05:27.207: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88dab3da-8db8-442c-8e2c-b0d372d250d2" in namespace "downward-api-7643" to be "success or failure" Feb 18 16:05:27.226: INFO: Pod "downwardapi-volume-88dab3da-8db8-442c-8e2c-b0d372d250d2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.271253ms Feb 18 16:05:29.232: INFO: Pod "downwardapi-volume-88dab3da-8db8-442c-8e2c-b0d372d250d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024854931s Feb 18 16:05:31.240: INFO: Pod "downwardapi-volume-88dab3da-8db8-442c-8e2c-b0d372d250d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032697211s Feb 18 16:05:33.248: INFO: Pod "downwardapi-volume-88dab3da-8db8-442c-8e2c-b0d372d250d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040490912s Feb 18 16:05:35.254: INFO: Pod "downwardapi-volume-88dab3da-8db8-442c-8e2c-b0d372d250d2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.046965426s STEP: Saw pod success Feb 18 16:05:35.254: INFO: Pod "downwardapi-volume-88dab3da-8db8-442c-8e2c-b0d372d250d2" satisfied condition "success or failure" Feb 18 16:05:35.259: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-88dab3da-8db8-442c-8e2c-b0d372d250d2 container client-container: STEP: delete the pod Feb 18 16:05:35.330: INFO: Waiting for pod downwardapi-volume-88dab3da-8db8-442c-8e2c-b0d372d250d2 to disappear Feb 18 16:05:35.413: INFO: Pod downwardapi-volume-88dab3da-8db8-442c-8e2c-b0d372d250d2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:05:35.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7643" for this suite. • [SLOW TEST:8.399 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":40,"skipped":464,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:05:35.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0218 16:05:38.878611 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 18 16:05:38.878: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:05:38.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3776" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":280,"completed":41,"skipped":470,"failed":0} ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:05:39.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-projected-all-test-volume-13a508e0-6f47-4452-a246-55087734e3b7 STEP: Creating secret with name secret-projected-all-test-volume-6ff8ca80-cc14-4ed9-b4d0-7abbd92cb5aa STEP: Creating a pod to test Check all projections for projected volume plugin Feb 18 16:05:39.502: INFO: Waiting up to 5m0s for pod "projected-volume-e868fb4c-6ea4-4a9b-b9be-0e57c481ef88" in namespace "projected-1676" to be "success or failure" Feb 18 16:05:39.523: INFO: Pod "projected-volume-e868fb4c-6ea4-4a9b-b9be-0e57c481ef88": Phase="Pending", Reason="", readiness=false. Elapsed: 20.62239ms Feb 18 16:05:41.533: INFO: Pod "projected-volume-e868fb4c-6ea4-4a9b-b9be-0e57c481ef88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030836402s Feb 18 16:05:43.540: INFO: Pod "projected-volume-e868fb4c-6ea4-4a9b-b9be-0e57c481ef88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038138221s Feb 18 16:05:45.546: INFO: Pod "projected-volume-e868fb4c-6ea4-4a9b-b9be-0e57c481ef88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04429405s Feb 18 16:05:47.556: INFO: Pod "projected-volume-e868fb4c-6ea4-4a9b-b9be-0e57c481ef88": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.054359066s Feb 18 16:05:49.575: INFO: Pod "projected-volume-e868fb4c-6ea4-4a9b-b9be-0e57c481ef88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073334412s STEP: Saw pod success Feb 18 16:05:49.576: INFO: Pod "projected-volume-e868fb4c-6ea4-4a9b-b9be-0e57c481ef88" satisfied condition "success or failure" Feb 18 16:05:49.625: INFO: Trying to get logs from node jerma-node pod projected-volume-e868fb4c-6ea4-4a9b-b9be-0e57c481ef88 container projected-all-volume-test: STEP: delete the pod Feb 18 16:05:49.760: INFO: Waiting for pod projected-volume-e868fb4c-6ea4-4a9b-b9be-0e57c481ef88 to disappear Feb 18 16:05:49.773: INFO: Pod projected-volume-e868fb4c-6ea4-4a9b-b9be-0e57c481ef88 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:05:49.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1676" for this suite. • [SLOW TEST:10.743 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":280,"completed":42,"skipped":470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:05:49.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-8754316a-807a-43a3-8c64-be411202791f in namespace container-probe-8670 Feb 18 16:05:56.227: INFO: Started pod liveness-8754316a-807a-43a3-8c64-be411202791f in namespace container-probe-8670 STEP: checking the pod's current state and verifying that restartCount is present Feb 18 16:05:56.231: INFO: Initial restart count of pod liveness-8754316a-807a-43a3-8c64-be411202791f is 0 Feb 18 16:06:16.545: INFO: Restart count of pod container-probe-8670/liveness-8754316a-807a-43a3-8c64-be411202791f is now 1 (20.313313656s elapsed) Feb 18 16:06:36.661: INFO: Restart count of pod container-probe-8670/liveness-8754316a-807a-43a3-8c64-be411202791f is now 2 (40.429888542s elapsed) Feb 18 16:06:56.788: INFO: Restart count of pod container-probe-8670/liveness-8754316a-807a-43a3-8c64-be411202791f is now 3 (1m0.556738919s elapsed) Feb 18 16:07:16.930: INFO: Restart count of pod 
container-probe-8670/liveness-8754316a-807a-43a3-8c64-be411202791f is now 4 (1m20.698063092s elapsed) Feb 18 16:08:23.255: INFO: Restart count of pod container-probe-8670/liveness-8754316a-807a-43a3-8c64-be411202791f is now 5 (2m27.023257483s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:08:23.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8670" for this suite. • [SLOW TEST:154.187 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":280,"completed":43,"skipped":556,"failed":0} [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:08:23.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:08:34.393: INFO: Waiting up to 5m0s for pod "client-envvars-20bc8522-612b-450b-b376-fc6d5a8d7a5f" in namespace "pods-9334" to be "success or failure" Feb 18 16:08:34.404: INFO: Pod "client-envvars-20bc8522-612b-450b-b376-fc6d5a8d7a5f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.539931ms Feb 18 16:08:36.460: INFO: Pod "client-envvars-20bc8522-612b-450b-b376-fc6d5a8d7a5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066254379s Feb 18 16:08:38.477: INFO: Pod "client-envvars-20bc8522-612b-450b-b376-fc6d5a8d7a5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083051504s Feb 18 16:08:40.486: INFO: Pod "client-envvars-20bc8522-612b-450b-b376-fc6d5a8d7a5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092398553s Feb 18 16:08:42.494: INFO: Pod "client-envvars-20bc8522-612b-450b-b376-fc6d5a8d7a5f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.100789876s STEP: Saw pod success Feb 18 16:08:42.495: INFO: Pod "client-envvars-20bc8522-612b-450b-b376-fc6d5a8d7a5f" satisfied condition "success or failure" Feb 18 16:08:42.498: INFO: Trying to get logs from node jerma-node pod client-envvars-20bc8522-612b-450b-b376-fc6d5a8d7a5f container env3cont: STEP: delete the pod Feb 18 16:08:42.583: INFO: Waiting for pod client-envvars-20bc8522-612b-450b-b376-fc6d5a8d7a5f to disappear Feb 18 16:08:42.595: INFO: Pod client-envvars-20bc8522-612b-450b-b376-fc6d5a8d7a5f no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:08:42.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9334" for this suite. • [SLOW TEST:18.614 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":280,"completed":44,"skipped":556,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:08:42.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 18 16:08:42.771: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ef87ad7-4a3a-4f94-a7ee-c475b2153bbe" in namespace "projected-2953" to be "success or failure" Feb 18 16:08:42.784: INFO: Pod "downwardapi-volume-6ef87ad7-4a3a-4f94-a7ee-c475b2153bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 12.453541ms Feb 18 16:08:44.796: INFO: Pod "downwardapi-volume-6ef87ad7-4a3a-4f94-a7ee-c475b2153bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024296717s Feb 18 16:08:46.803: INFO: Pod "downwardapi-volume-6ef87ad7-4a3a-4f94-a7ee-c475b2153bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032116686s Feb 18 16:08:48.812: INFO: Pod "downwardapi-volume-6ef87ad7-4a3a-4f94-a7ee-c475b2153bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041016777s Feb 18 16:08:50.825: INFO: Pod "downwardapi-volume-6ef87ad7-4a3a-4f94-a7ee-c475b2153bbe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.053413162s STEP: Saw pod success Feb 18 16:08:50.825: INFO: Pod "downwardapi-volume-6ef87ad7-4a3a-4f94-a7ee-c475b2153bbe" satisfied condition "success or failure" Feb 18 16:08:50.829: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6ef87ad7-4a3a-4f94-a7ee-c475b2153bbe container client-container: STEP: delete the pod Feb 18 16:08:50.891: INFO: Waiting for pod downwardapi-volume-6ef87ad7-4a3a-4f94-a7ee-c475b2153bbe to disappear Feb 18 16:08:50.902: INFO: Pod downwardapi-volume-6ef87ad7-4a3a-4f94-a7ee-c475b2153bbe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:08:50.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2953" for this suite. • [SLOW TEST:8.312 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":45,"skipped":557,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:08:50.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 18 16:08:51.700: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 18 16:08:53.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638931, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638931, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638931, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638931, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Feb 18 16:08:55.733: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638931, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638931, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638931, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638931, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:08:57.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638931, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638931, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638931, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717638931, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 18 16:09:01.136: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:09:13.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8187" for this suite. STEP: Destroying namespace "webhook-8187-markers" for this suite. 
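For reference, the "slow webhook" registrations this test performs correspond roughly to a ValidatingWebhookConfiguration like the one below. This is a minimal sketch, not the suite's exact object: the webhook name, namespace, handler path, and rules are assumptions (the framework generates its own certificates and wiring), and the caBundle is elided.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-example          # hypothetical name
webhooks:
- name: slow-webhook.example.com      # hypothetical qualified name
  clientConfig:
    service:
      namespace: webhook-markers      # hypothetical; the suite uses generated namespaces
      name: e2e-test-webhook          # service name seen in the log above
      path: /always-allow-delay-5s    # assumed path of a handler that sleeps 5s before allowing
    caBundle: "<base64 CA>"           # elided
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
  timeoutSeconds: 1                   # shorter than the handler's 5s sleep, so the request times out
  failurePolicy: Fail                 # with Ignore the timeout is tolerated; omitting timeoutSeconds
                                      # defaults it to 10s in v1, matching the last step in the log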
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:22.773 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":280,"completed":46,"skipped":561,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:09:13.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod busybox-d6b835b8-5e74-46a1-a365-6c7457c924ed in namespace container-probe-2057 Feb 18 16:09:21.836: INFO: Started pod busybox-d6b835b8-5e74-46a1-a365-6c7457c924ed in namespace container-probe-2057 STEP: checking the pod's current state and verifying that restartCount is present Feb 18 16:09:21.885: INFO: Initial restart count of pod busybox-d6b835b8-5e74-46a1-a365-6c7457c924ed is 0 Feb 18 16:10:10.338: INFO: Restart count of pod container-probe-2057/busybox-d6b835b8-5e74-46a1-a365-6c7457c924ed is now 1 (48.453444784s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:10:10.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2057" for this suite. 
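The restart observed above is driven by an exec probe that simply cats a file the container deletes partway through its life. A minimal sketch of that pod shape, assuming the canonical busybox pattern rather than the suite's exact spec (names and timings are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-example              # hypothetical name
spec:
  containers:
  - name: liveness
    image: busybox
    # Create the health file, remove it after a while, then keep running;
    # once the file is gone, 'cat /tmp/health' fails and the kubelet restarts the container.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5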
• [SLOW TEST:57.328 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":47,"skipped":570,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:10:11.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Feb 18 16:10:19.840: INFO: Successfully updated pod "annotationupdate2514f6cf-2126-4b9b-b097-931a74ae3b39" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:10:21.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7909" for this suite. 
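The annotation update above is visible to the container because pod metadata is mounted through a projected downwardAPI volume, which the kubelet refreshes when the object changes (environment variables populated from the same fieldRef would not update after start). A minimal sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example     # hypothetical name
  annotations:
    build: "one"                     # value a later patch would modify
spec:
  containers:
  - name: client-container
    image: busybox
    args: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations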
• [SLOW TEST:10.913 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":48,"skipped":620,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:10:21.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:10:22.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1795" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":280,"completed":49,"skipped":651,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:10:22.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:10:34.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5470" for this suite. • [SLOW TEST:11.959 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":280,"completed":50,"skipped":673,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:10:34.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 18 16:10:35.220: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 18 16:10:37.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717639035, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717639035, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717639035, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717639035, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:10:39.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717639035, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717639035, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717639035, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717639035, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:10:41.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717639035, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717639035, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717639035, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717639035, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 18 16:10:44.284: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Feb 18 16:10:52.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-750 to-be-attached-pod -i -c=container1' Feb 18 16:10:52.559: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:10:52.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-750" for this suite. STEP: Destroying namespace "webhook-750-markers" for this suite. 
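The rc: 1 from 'kubectl attach' above is produced by a validating webhook registered against the pods/attach subresource; attach (like exec) reaches the API server as a CONNECT operation. Roughly, with hypothetical names, an assumed handler path, and the caBundle elided:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attach-example               # hypothetical name
webhooks:
- name: deny-attaching-pod.example.com    # hypothetical qualified name
  clientConfig:
    service:
      namespace: webhook-markers          # hypothetical
      name: e2e-test-webhook              # service name seen in the log above
      path: /pods/attach                  # assumed path of a handler that denies the request
    caBundle: "<base64 CA>"               # elided
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]
    resources: ["pods/attach"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail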
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:18.546 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":280,"completed":51,"skipped":696,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:10:52.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 18 16:10:52.927: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c00570af-4bd4-41b3-a629-558aa4788e0b" in namespace "downward-api-9082" to be "success or failure" Feb 18 16:10:52.947: INFO: Pod "downwardapi-volume-c00570af-4bd4-41b3-a629-558aa4788e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.353579ms Feb 18 16:10:54.953: INFO: Pod "downwardapi-volume-c00570af-4bd4-41b3-a629-558aa4788e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02491563s Feb 18 16:10:56.960: INFO: Pod "downwardapi-volume-c00570af-4bd4-41b3-a629-558aa4788e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032514226s Feb 18 16:10:58.968: INFO: Pod "downwardapi-volume-c00570af-4bd4-41b3-a629-558aa4788e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040506598s Feb 18 16:11:01.582: INFO: Pod "downwardapi-volume-c00570af-4bd4-41b3-a629-558aa4788e0b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.65435391s Feb 18 16:11:03.590: INFO: Pod "downwardapi-volume-c00570af-4bd4-41b3-a629-558aa4788e0b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.66190235s STEP: Saw pod success Feb 18 16:11:03.590: INFO: Pod "downwardapi-volume-c00570af-4bd4-41b3-a629-558aa4788e0b" satisfied condition "success or failure" Feb 18 16:11:03.593: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c00570af-4bd4-41b3-a629-558aa4788e0b container client-container: STEP: delete the pod Feb 18 16:11:03.645: INFO: Waiting for pod downwardapi-volume-c00570af-4bd4-41b3-a629-558aa4788e0b to disappear Feb 18 16:11:03.656: INFO: Pod downwardapi-volume-c00570af-4bd4-41b3-a629-558aa4788e0b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:11:03.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9082" for this suite. • [SLOW TEST:11.068 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":52,"skipped":719,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:11:03.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: validating cluster-info Feb 18 16:11:03.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Feb 18 16:11:04.134: INFO: stderr: "" Feb 18 16:11:04.134: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:11:04.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7028" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":280,"completed":53,"skipped":724,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:11:04.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:11:04.354: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:11:05.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6902" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":280,"completed":54,"skipped":725,"failed":0} ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:11:05.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating pod Feb 18 16:11:11.595: INFO: Pod pod-hostip-144f41cf-e46d-4cd0-b3dd-1cb17cffa02c has hostIP: 10.96.2.250 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:11:11.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4364" for this suite. 
• [SLOW TEST:6.190 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":280,"completed":55,"skipped":725,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:11:11.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:11:11.780: INFO: Waiting up to 5m0s for pod "busybox-user-65534-76e1acb9-f1bd-47cd-993a-dd307a04b107" in namespace "security-context-test-1303" to be "success or failure" Feb 18 16:11:11.824: INFO: Pod "busybox-user-65534-76e1acb9-f1bd-47cd-993a-dd307a04b107": Phase="Pending", Reason="", readiness=false. Elapsed: 43.684022ms Feb 18 16:11:13.830: INFO: Pod "busybox-user-65534-76e1acb9-f1bd-47cd-993a-dd307a04b107": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050237731s Feb 18 16:11:15.839: INFO: Pod "busybox-user-65534-76e1acb9-f1bd-47cd-993a-dd307a04b107": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059388261s Feb 18 16:11:17.846: INFO: Pod "busybox-user-65534-76e1acb9-f1bd-47cd-993a-dd307a04b107": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066311618s Feb 18 16:11:19.914: INFO: Pod "busybox-user-65534-76e1acb9-f1bd-47cd-993a-dd307a04b107": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.134227571s Feb 18 16:11:19.914: INFO: Pod "busybox-user-65534-76e1acb9-f1bd-47cd-993a-dd307a04b107" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:11:19.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1303" for this suite. 
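The uid check above hinges on the container securityContext. A minimal sketch of a pod that runs as uid 65534 and exits successfully, so a "success or failure" wait like the one in the log can observe Succeeded (names hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534-example   # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "id -u"]   # prints 65534 when the securityContext applies
    securityContext:
      runAsUser: 65534               # conventionally the "nobody" uid
  restartPolicy: Never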
• [SLOW TEST:8.310 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":56,"skipped":729,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:11:19.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Feb 18 16:11:20.112: INFO: Waiting up to 5m0s for pod "downward-api-9aa0d65e-d610-4e92-ad44-e5ef7a5942fd" in namespace "downward-api-8562" to be "success or failure" Feb 18 16:11:20.200: INFO: Pod "downward-api-9aa0d65e-d610-4e92-ad44-e5ef7a5942fd": Phase="Pending", Reason="", readiness=false. Elapsed: 87.911335ms Feb 18 16:11:22.205: INFO: Pod "downward-api-9aa0d65e-d610-4e92-ad44-e5ef7a5942fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093312515s Feb 18 16:11:24.211: INFO: Pod "downward-api-9aa0d65e-d610-4e92-ad44-e5ef7a5942fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099654673s Feb 18 16:11:26.219: INFO: Pod "downward-api-9aa0d65e-d610-4e92-ad44-e5ef7a5942fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107101566s Feb 18 16:11:28.225: INFO: Pod "downward-api-9aa0d65e-d610-4e92-ad44-e5ef7a5942fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113318477s STEP: Saw pod success Feb 18 16:11:28.225: INFO: Pod "downward-api-9aa0d65e-d610-4e92-ad44-e5ef7a5942fd" satisfied condition "success or failure" Feb 18 16:11:28.228: INFO: Trying to get logs from node jerma-node pod downward-api-9aa0d65e-d610-4e92-ad44-e5ef7a5942fd container dapi-container: STEP: delete the pod Feb 18 16:11:28.281: INFO: Waiting for pod downward-api-9aa0d65e-d610-4e92-ad44-e5ef7a5942fd to disappear Feb 18 16:11:28.287: INFO: Pod downward-api-9aa0d65e-d610-4e92-ad44-e5ef7a5942fd no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:11:28.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8562" for this suite. 
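What the test above verifies: when a container declares no resource limits, limits.cpu and limits.memory exposed through the downward API fall back to the node's allocatable capacity. A minimal sketch (names hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults-example   # hypothetical name
spec:
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
    # No resources.limits declared, so both values below default to node allocatable.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
  restartPolicy: Never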
• [SLOW TEST:8.372 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":280,"completed":57,"skipped":758,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:11:28.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 18 16:11:28.486: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ae78359-c7c3-427c-8bfc-e9c6faa7eba1" in namespace "projected-3463" to be "success or failure" Feb 18 16:11:28.501: INFO: Pod "downwardapi-volume-3ae78359-c7c3-427c-8bfc-e9c6faa7eba1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.4277ms Feb 18 16:11:30.512: INFO: Pod "downwardapi-volume-3ae78359-c7c3-427c-8bfc-e9c6faa7eba1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026033652s Feb 18 16:11:32.520: INFO: Pod "downwardapi-volume-3ae78359-c7c3-427c-8bfc-e9c6faa7eba1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034517409s Feb 18 16:11:34.533: INFO: Pod "downwardapi-volume-3ae78359-c7c3-427c-8bfc-e9c6faa7eba1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047231542s Feb 18 16:11:36.541: INFO: Pod "downwardapi-volume-3ae78359-c7c3-427c-8bfc-e9c6faa7eba1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055490124s STEP: Saw pod success Feb 18 16:11:36.542: INFO: Pod "downwardapi-volume-3ae78359-c7c3-427c-8bfc-e9c6faa7eba1" satisfied condition "success or failure" Feb 18 16:11:36.544: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-3ae78359-c7c3-427c-8bfc-e9c6faa7eba1 container client-container: STEP: delete the pod Feb 18 16:11:36.628: INFO: Waiting for pod downwardapi-volume-3ae78359-c7c3-427c-8bfc-e9c6faa7eba1 to disappear Feb 18 16:11:36.633: INFO: Pod downwardapi-volume-3ae78359-c7c3-427c-8bfc-e9c6faa7eba1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:11:36.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3463" for this suite. 
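The projected variant exercised above differs from a plain downwardAPI volume only in nesting the same items under projected.sources; the memory request is surfaced to the container as a file. A minimal sketch (names hypothetical; 32Mi with divisor "1" renders as 33554432 bytes):

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container   # required for resourceFieldRef in volume items
              resource: requests.memory
              divisor: "1"                      # report the value in bytes
  restartPolicy: Never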
• [SLOW TEST:8.345 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":58,"skipped":780,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:11:36.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a replication controller Feb 18 16:11:36.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5426' Feb 18 16:11:37.203: INFO: stderr: "" Feb 18 16:11:37.203: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 18 16:11:37.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5426' Feb 18 16:11:37.396: INFO: stderr: "" Feb 18 16:11:37.396: INFO: stdout: "update-demo-nautilus-79bps update-demo-nautilus-tnnsp " Feb 18 16:11:37.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79bps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5426' Feb 18 16:11:37.604: INFO: stderr: "" Feb 18 16:11:37.604: INFO: stdout: "" Feb 18 16:11:37.604: INFO: update-demo-nautilus-79bps is created but not running Feb 18 16:11:42.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5426' Feb 18 16:11:43.829: INFO: stderr: "" Feb 18 16:11:43.829: INFO: stdout: "update-demo-nautilus-79bps update-demo-nautilus-tnnsp " Feb 18 16:11:43.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79bps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5426' Feb 18 16:11:44.760: INFO: stderr: "" Feb 18 16:11:44.760: INFO: stdout: "" Feb 18 16:11:44.760: INFO: update-demo-nautilus-79bps is created but not running Feb 18 16:11:49.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5426' Feb 18 16:11:49.929: INFO: stderr: "" Feb 18 16:11:49.929: INFO: stdout: "update-demo-nautilus-79bps update-demo-nautilus-tnnsp " Feb 18 16:11:49.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79bps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5426' Feb 18 16:11:50.077: INFO: stderr: "" Feb 18 16:11:50.077: INFO: stdout: "true" Feb 18 16:11:50.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79bps -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5426' Feb 18 16:11:50.157: INFO: stderr: "" Feb 18 16:11:50.157: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 18 16:11:50.157: INFO: validating pod update-demo-nautilus-79bps Feb 18 16:11:50.201: INFO: got data: { "image": "nautilus.jpg" } Feb 18 16:11:50.201: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 18 16:11:50.201: INFO: update-demo-nautilus-79bps is verified up and running Feb 18 16:11:50.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tnnsp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5426' Feb 18 16:11:50.282: INFO: stderr: "" Feb 18 16:11:50.283: INFO: stdout: "true" Feb 18 16:11:50.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tnnsp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5426' Feb 18 16:11:50.433: INFO: stderr: "" Feb 18 16:11:50.433: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 18 16:11:50.433: INFO: validating pod update-demo-nautilus-tnnsp Feb 18 16:11:50.444: INFO: got data: { "image": "nautilus.jpg" } Feb 18 16:11:50.444: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 18 16:11:50.444: INFO: update-demo-nautilus-tnnsp is verified up and running STEP: using delete to clean up resources Feb 18 16:11:50.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5426' Feb 18 16:11:50.689: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 18 16:11:50.690: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 18 16:11:50.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5426' Feb 18 16:11:50.794: INFO: stderr: "No resources found in kubectl-5426 namespace.\n" Feb 18 16:11:50.794: INFO: stdout: "" Feb 18 16:11:50.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5426 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 18 16:11:50.964: INFO: stderr: "" Feb 18 16:11:50.964: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:11:50.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5426" for this suite. • [SLOW TEST:14.341 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":280,"completed":59,"skipped":785,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:11:50.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:11:51.118: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 18 16:11:52.815: INFO: Number of nodes with available pods: 0 Feb 18 16:11:52.815: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Feb 18 16:11:53.226: INFO: Number of nodes with available pods: 0 Feb 18 16:11:53.226: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:11:54.234: INFO: Number of nodes with available pods: 0 Feb 18 16:11:54.234: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:11:56.492: INFO: Number of nodes with available pods: 0 Feb 18 16:11:56.493: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:11:57.307: INFO: Number of nodes with available pods: 0 Feb 18 16:11:57.307: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:11:58.241: INFO: Number of nodes with available pods: 0 Feb 18 16:11:58.242: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:11:59.245: INFO: Number of nodes with available pods: 0 Feb 18 16:11:59.245: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:00.441: INFO: Number of nodes with available pods: 0 Feb 18 16:12:00.441: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:01.258: INFO: Number of nodes with available pods: 0 Feb 18 16:12:01.258: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:02.281: INFO: Number of nodes with available pods: 1 Feb 18 16:12:02.282: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 18 16:12:02.420: INFO: Number of nodes with available pods: 1 Feb 18 16:12:02.420: INFO: Number of running nodes: 0, number of available pods: 1 Feb 18 16:12:03.426: INFO: Number of nodes with available pods: 0 Feb 18 16:12:03.426: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 18 16:12:03.448: INFO: Number of nodes with available pods: 0 Feb 18 16:12:03.448: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:04.458: INFO: Number of nodes with available pods: 0 Feb 18 16:12:04.458: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:05.585: INFO: Number of nodes with available pods: 0 Feb 18 16:12:05.585: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:06.459: INFO: Number of nodes with available pods: 0 Feb 18 16:12:06.459: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:07.456: INFO: Number of nodes with available pods: 0 Feb 18 16:12:07.456: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:08.461: INFO: Number of nodes with available pods: 0 Feb 18 16:12:08.461: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:09.462: INFO: Number of nodes with available pods: 0 Feb 18 16:12:09.462: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:10.497: INFO: Number of nodes with available pods: 0 Feb 18 16:12:10.497: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:11.456: INFO: Number of nodes with available pods: 0 Feb 18 16:12:11.457: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:12.461: INFO: Number of nodes with available pods: 0 Feb 18 16:12:12.461: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:13.459: INFO: Number of nodes with 
available pods: 0 Feb 18 16:12:13.459: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:14.788: INFO: Number of nodes with available pods: 0 Feb 18 16:12:14.789: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:15.454: INFO: Number of nodes with available pods: 0 Feb 18 16:12:15.454: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:16.465: INFO: Number of nodes with available pods: 0 Feb 18 16:12:16.465: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:17.471: INFO: Number of nodes with available pods: 0 Feb 18 16:12:17.471: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:18.929: INFO: Number of nodes with available pods: 0 Feb 18 16:12:18.929: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:19.527: INFO: Number of nodes with available pods: 0 Feb 18 16:12:19.528: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:20.462: INFO: Number of nodes with available pods: 0 Feb 18 16:12:20.462: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:12:21.457: INFO: Number of nodes with available pods: 1 Feb 18 16:12:21.457: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8973, will wait for the garbage collector to delete the pods Feb 18 16:12:21.534: INFO: Deleting DaemonSet.extensions daemon-set took: 11.956683ms Feb 18 16:12:21.834: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.755583ms Feb 18 16:12:27.851: INFO: Number of nodes with available pods: 0 Feb 18 16:12:27.851: INFO: Number of running nodes: 0, number of available pods: 0 Feb 18 16:12:27.861: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8973/daemonsets","resourceVersion":"9205549"},"items":null} Feb 18 16:12:27.866: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8973/pods","resourceVersion":"9205549"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:12:27.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8973" for this suite. 
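The long poll above tracks a DaemonSet constrained by a node selector while the test relabels nodes (blue, then green) and switches the update strategy to RollingUpdate. The shape of such a DaemonSet, as a sketch only: "daemon-set" is the name from the log, but the label, image, and other spec details here are assumptions.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                 # name used in the log; spec details are assumed
spec:
  selector:
    matchLabels:
      name: daemon-set
  updateStrategy:
    type: RollingUpdate            # the strategy the test switches to mid-run
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      nodeSelector:
        color: green               # hypothetical label; only matching nodes get a pod
      containers:
      - name: app
        image: busybox             # hypothetical placeholder image
        args: ["sleep", "3600"]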
• [SLOW TEST:37.086 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":280,"completed":60,"skipped":791,"failed":0} S ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:12:28.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating all guestbook components
Feb 18 16:12:28.243: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Feb 18 16:12:28.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2680' Feb 18 16:12:28.701: INFO: stderr: "" Feb 18 16:12:28.701: INFO: stdout: "service/agnhost-slave created\n"
Feb 18 16:12:28.702: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Feb 18 16:12:28.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2680' Feb 18 16:12:29.024: INFO: stderr: "" Feb 18 16:12:29.024: INFO: stdout: "service/agnhost-master created\n"
Feb 18 16:12:29.024: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Feb 18 16:12:29.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2680' Feb 18 16:12:29.482: INFO: stderr: "" Feb 18 16:12:29.482: INFO: stdout: "service/frontend created\n"
Feb 18 16:12:29.483: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Feb 18 16:12:29.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2680' Feb 18 16:12:29.907: INFO: stderr: "" Feb 18 16:12:29.907: INFO: stdout: "deployment.apps/frontend created\n"
Feb 18 16:12:29.908: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Feb 18 16:12:29.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2680' Feb 18 16:12:30.439: INFO: stderr: "" Feb 18 16:12:30.439: INFO: stdout: "deployment.apps/agnhost-master created\n"
Feb 18 16:12:30.440: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Feb 18 16:12:30.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2680' Feb 18 16:12:32.081: INFO: stderr: "" Feb 18 16:12:32.081: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Feb 18 16:12:32.081: INFO: Waiting for all frontend pods to be Running. Feb 18 16:12:52.135: INFO: Waiting for frontend to serve content. Feb 18 16:12:52.163: INFO: Trying to add a new entry to the guestbook. Feb 18 16:12:52.189: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
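The write fails because the frontend cannot reach the backend it resolved for agnhost-slave at 10.32.0.1:6379 (connection refused); the retries below never change. A hypothetical triage of this kind of refusal, reusing only names that appear in the log (the kubectl-2680 namespace is destroyed at the end of the test, so these commands are illustrative):

  # Which pod IPs actually back the slave Service?
  kubectl -n kubectl-2680 get endpoints agnhost-slave -o wide
  # Are the slave pods Running, and on which pod IPs?
  kubectl -n kubectl-2680 get pods -l role=slave -o wide
  # Is anything listening on 6379 inside the slave pods?
  kubectl -n kubectl-2680 logs deploy/agnhost-slave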
[... the identical "Failed to get response from guestbook" / connection-refused retry repeated 35 more times, roughly every 5 seconds, from 16:12:57.210 through 16:15:48.071 ...]
Feb 18 16:15:53.072: FAIL: Cannot added new entry in 180 seconds.
Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x551f740, 0xc005a98dc0, 0xc0032718f0, 0xc)
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339 +0x551
k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2()
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:420 +0x165
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00292e300)
    _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc00292e300)
    _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc00292e300, 0x4c9f938)
    /usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:960 +0x350
STEP: using delete to clean up resources Feb 18 16:15:53.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2680' Feb 18 16:15:55.716: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 18 16:15:55.716: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Feb 18 16:15:55.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2680' Feb 18 16:15:55.966: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Feb 18 16:15:55.966: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Feb 18 16:15:55.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2680' Feb 18 16:15:56.141: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 18 16:15:56.141: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 18 16:15:56.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2680' Feb 18 16:15:56.299: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 18 16:15:56.299: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 18 16:15:56.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2680' Feb 18 16:15:57.070: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 18 16:15:57.070: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Feb 18 16:15:57.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2680' Feb 18 16:15:57.378: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 18 16:15:57.379: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "kubectl-2680". STEP: Found 37 events. 
Feb 18 16:15:57.388: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-master-74c46fb7d4-9747d: {default-scheduler } Scheduled: Successfully assigned kubectl-2680/agnhost-master-74c46fb7d4-9747d to jerma-node Feb 18 16:15:57.388: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-r8zzb: {default-scheduler } Scheduled: Successfully assigned kubectl-2680/agnhost-slave-774cfc759f-r8zzb to jerma-node Feb 18 16:15:57.388: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-rbv5w: {default-scheduler } Scheduled: Successfully assigned kubectl-2680/agnhost-slave-774cfc759f-rbv5w to jerma-server-mvvl6gufaqub Feb 18 16:15:57.388: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-9ptp8: {default-scheduler } Scheduled: Successfully assigned kubectl-2680/frontend-6c5f89d5d4-9ptp8 to jerma-node Feb 18 16:15:57.388: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-hfvnl: {default-scheduler } Scheduled: Successfully assigned kubectl-2680/frontend-6c5f89d5d4-hfvnl to jerma-server-mvvl6gufaqub Feb 18 16:15:57.389: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-sdk99: {default-scheduler } Scheduled: Successfully assigned kubectl-2680/frontend-6c5f89d5d4-sdk99 to jerma-node Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:29 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3 Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:29 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-sdk99 Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:29 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-hfvnl Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:29 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-9ptp8 Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:30 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1 Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:32 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-9747d Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:32 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2 Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:32 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-r8zzb Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:32 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-rbv5w Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:39 +0000 UTC - event for frontend-6c5f89d5d4-hfvnl: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:39 +0000 UTC - event for frontend-6c5f89d5d4-sdk99: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:41 +0000 UTC - event for agnhost-slave-774cfc759f-rbv5w: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image 
"gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:42 +0000 UTC - event for agnhost-master-74c46fb7d4-9747d: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:42 +0000 UTC - event for frontend-6c5f89d5d4-9ptp8: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:44 +0000 UTC - event for agnhost-slave-774cfc759f-r8zzb: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:45 +0000 UTC - event for frontend-6c5f89d5d4-hfvnl: {kubelet jerma-server-mvvl6gufaqub} Created: Created container guestbook-frontend Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:46 +0000 UTC - event for agnhost-slave-774cfc759f-rbv5w: {kubelet jerma-server-mvvl6gufaqub} Created: Created container slave Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:46 +0000 UTC - event for frontend-6c5f89d5d4-sdk99: {kubelet jerma-node} Created: Created container guestbook-frontend Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:47 +0000 UTC - event for agnhost-slave-774cfc759f-rbv5w: {kubelet jerma-server-mvvl6gufaqub} Started: Started container slave Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:47 +0000 UTC - event for frontend-6c5f89d5d4-hfvnl: {kubelet jerma-server-mvvl6gufaqub} Started: Started container guestbook-frontend Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:48 +0000 UTC - event for agnhost-master-74c46fb7d4-9747d: {kubelet jerma-node} Created: Created container master Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:48 +0000 UTC - event for agnhost-master-74c46fb7d4-9747d: {kubelet jerma-node} Started: Started container master Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:48 +0000 UTC - event for agnhost-slave-774cfc759f-r8zzb: {kubelet jerma-node} Created: Created container slave Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:48 +0000 UTC - event for agnhost-slave-774cfc759f-r8zzb: {kubelet jerma-node} Started: Started container slave Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:48 +0000 UTC - event for frontend-6c5f89d5d4-9ptp8: {kubelet jerma-node} Started: Started container guestbook-frontend Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:48 +0000 UTC - event for frontend-6c5f89d5d4-9ptp8: {kubelet jerma-node} Created: Created container guestbook-frontend Feb 18 16:15:57.389: INFO: At 2020-02-18 16:12:48 +0000 UTC - event for frontend-6c5f89d5d4-sdk99: {kubelet jerma-node} Started: Started container guestbook-frontend Feb 18 16:15:57.389: INFO: At 2020-02-18 16:15:57 +0000 UTC - event for agnhost-master-74c46fb7d4-9747d: {kubelet jerma-node} Killing: Stopping container master Feb 18 16:15:57.389: INFO: At 2020-02-18 16:15:57 +0000 UTC - event for frontend-6c5f89d5d4-9ptp8: {kubelet jerma-node} Killing: Stopping container guestbook-frontend Feb 18 16:15:57.389: INFO: At 2020-02-18 16:15:57 +0000 UTC - event for frontend-6c5f89d5d4-hfvnl: {kubelet jerma-server-mvvl6gufaqub} Killing: Stopping container guestbook-frontend Feb 18 16:15:57.389: INFO: At 2020-02-18 16:15:57 +0000 UTC - event for frontend-6c5f89d5d4-sdk99: {kubelet jerma-node} Killing: Stopping container guestbook-frontend Feb 18 16:15:57.396: INFO: POD NODE PHASE GRACE CONDITIONS Feb 18 16:15:57.396: INFO: agnhost-master-74c46fb7d4-9747d 
jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:32 +0000 UTC }] Feb 18 16:15:57.396: INFO: agnhost-slave-774cfc759f-r8zzb jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:32 +0000 UTC }] Feb 18 16:15:57.396: INFO: agnhost-slave-774cfc759f-rbv5w jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:32 +0000 UTC }] Feb 18 16:15:57.396: INFO: frontend-6c5f89d5d4-9ptp8 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:30 +0000 UTC }] Feb 18 16:15:57.396: INFO: frontend-6c5f89d5d4-hfvnl jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:29 +0000 UTC }] Feb 18 16:15:57.396: INFO: frontend-6c5f89d5d4-sdk99 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:12:29 +0000 UTC }] Feb 18 16:15:57.396: INFO: Feb 18 16:15:57.410: INFO: Logging node info for node jerma-node Feb 18 16:15:57.434: INFO: Node Info: &Node{ObjectMeta:{jerma-node /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 9205250 0 2020-01-04 11:59:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-18 16:11:22 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-18 16:11:22 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-18 16:11:22 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-18 16:11:22 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 
weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 18 16:15:57.436: INFO: Logging kubelet events for node jerma-node Feb 18 16:15:57.444: INFO: Logging pods the kubelet thinks is on node jerma-node Feb 18 16:15:57.540: INFO: agnhost-slave-774cfc759f-r8zzb started at 2020-02-18 16:12:33 +0000 UTC (0+1 container statuses recorded) Feb 18 16:15:57.540: INFO: Container slave ready: true, restart count 0 Feb 18 16:15:57.540: INFO: frontend-6c5f89d5d4-sdk99 started at 2020-02-18 16:12:30 +0000 UTC (0+1 container statuses recorded) Feb 18 16:15:57.540: INFO: Container guestbook-frontend ready: true, restart count 0 Feb 18 16:15:57.540: INFO: agnhost-master-74c46fb7d4-9747d started at 2020-02-18 16:12:32 +0000 UTC (0+1 container statuses recorded) Feb 18 16:15:57.540: INFO: Container master ready: true, restart count 0 Feb 18 16:15:57.540: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded) Feb 18 
16:15:57.540: INFO: Container kube-proxy ready: true, restart count 0 Feb 18 16:15:57.540: INFO: frontend-6c5f89d5d4-9ptp8 started at 2020-02-18 16:12:30 +0000 UTC (0+1 container statuses recorded) Feb 18 16:15:57.540: INFO: Container guestbook-frontend ready: true, restart count 0 Feb 18 16:15:57.540: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded) Feb 18 16:15:57.540: INFO: Container weave ready: true, restart count 1 Feb 18 16:15:57.540: INFO: Container weave-npc ready: true, restart count 0 W0218 16:15:57.551073 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 18 16:15:57.596: INFO: Latency metrics for node jerma-node Feb 18 16:15:57.597: INFO: Logging node info for node jerma-server-mvvl6gufaqub Feb 18 16:15:57.605: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 9205787 0 2020-01-04 11:47:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-18 16:13:22 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-18 16:13:22 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-18 16:13:22 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-18 16:13:22 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[ollivier/functest-kubernetes-security@sha256:e07875af6d375759fd233dc464382bb51d2464f6ae50a60625e41226eb1f87be ollivier/functest-kubernetes-security:latest],SizeBytes:1118568659,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 18 16:15:57.606: INFO: Logging kubelet events for node jerma-server-mvvl6gufaqub Feb 18 16:15:57.610: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub Feb 18 16:15:57.630: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded) Feb 18 16:15:57.630: INFO: Container etcd ready: true, restart count 1 Feb 18 16:15:57.630: INFO: agnhost-slave-774cfc759f-rbv5w started at 2020-02-18 16:12:34 +0000 UTC (0+1 container statuses recorded) Feb 18 16:15:57.630: INFO: Container slave ready: true, restart count 0 Feb 18 16:15:57.630: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded) Feb 18 16:15:57.630: INFO: Container kube-apiserver ready: true, restart count 1 Feb 18 16:15:57.630: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded) Feb 18 16:15:57.630: INFO: Container coredns ready: true, restart count 0 Feb 18 16:15:57.630: INFO: frontend-6c5f89d5d4-hfvnl started at 2020-02-18 16:12:30 +0000 UTC (0+1 container statuses recorded) Feb 18 16:15:57.630: INFO: Container guestbook-frontend ready: true, restart count 0 Feb 18 16:15:57.630: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded) Feb 18 16:15:57.630: INFO: Container coredns ready: true, restart count 0 Feb 18 16:15:57.630: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded) Feb 18 16:15:57.630: INFO: Container kube-proxy ready: true, restart count 0 Feb 18 16:15:57.630: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded) Feb 18 16:15:57.630: INFO: Container weave ready: true, restart count 0 Feb 18 16:15:57.630: INFO: Container weave-npc ready: true, restart count 0 Feb 18 16:15:57.630: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded) Feb 18 16:15:57.630: INFO: Container kube-controller-manager ready: true, restart count 11 
Feb 18 16:15:57.630: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded) Feb 18 16:15:57.630: INFO: Container kube-scheduler ready: true, restart count 15 W0218 16:15:57.635354 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 18 16:15:58.101: INFO: Latency metrics for node jerma-server-mvvl6gufaqub Feb 18 16:15:58.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2680" for this suite. • Failure [210.246 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388 should create and stop a working application [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:15:53.072: Cannot added new entry in 180 seconds. /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339 ------------------------------ {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":280,"completed":60,"skipped":792,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:15:58.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0218 16:16:03.105345 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 18 16:16:03.105: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:16:03.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7881" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":280,"completed":61,"skipped":853,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:16:03.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:16:22.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-920" for this suite. 
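The test above mounts a secret and a configmap into the same pod and asserts that the wrapped emptyDir volumes do not conflict. A minimal sketch of the same shape, with hypothetical resource names (demo-secret, demo-cm, and wrapper-demo are not from this run):

  kubectl -n emptydir-wrapper-920 create secret generic demo-secret --from-literal=k=v
  kubectl -n emptydir-wrapper-920 create configmap demo-cm --from-literal=k=v
  kubectl -n emptydir-wrapper-920 apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: wrapper-demo    # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: c
      image: busybox:1.29
      command: ["sh", "-c", "ls /etc/secret /etc/cm"]
      volumeMounts:
      - name: secret-vol
        mountPath: /etc/secret
      - name: cm-vol
        mountPath: /etc/cm
    volumes:
    - name: secret-vol
      secret:
        secretName: demo-secret
    - name: cm-vol
      configMap:
        name: demo-cm
  EOF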
• [SLOW TEST:19.630 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":280,"completed":62,"skipped":880,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:16:22.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2595 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2595 STEP: creating replication controller externalsvc in namespace services-2595 I0218 16:16:23.220110 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2595, replica count: 2 I0218 16:16:26.271176 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:16:29.271983 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:16:32.272440 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:16:35.272978 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:16:38.273787 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Feb 18 16:16:38.338: INFO: Creating new exec pod Feb 18 16:16:44.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2595 execpodjsthf -- /bin/sh -x -c nslookup nodeport-service' Feb 18 16:16:44.898: INFO: stderr: "I0218 16:16:44.640929 757 log.go:172] (0xc000adb760) (0xc0009b4280) Create stream\nI0218 16:16:44.641109 757 log.go:172] (0xc000adb760) (0xc0009b4280) Stream added, broadcasting: 1\nI0218 16:16:44.644766 757 log.go:172] (0xc000adb760) Reply frame received for 1\nI0218 16:16:44.644812 757 log.go:172] (0xc000adb760) (0xc0009a0000) Create stream\nI0218 16:16:44.644826 757 log.go:172] 
(0xc000adb760) (0xc0009a0000) Stream added, broadcasting: 3\nI0218 16:16:44.646227 757 log.go:172] (0xc000adb760) Reply frame received for 3\nI0218 16:16:44.646254 757 log.go:172] (0xc000adb760) (0xc000aa4500) Create stream\nI0218 16:16:44.646264 757 log.go:172] (0xc000adb760) (0xc000aa4500) Stream added, broadcasting: 5\nI0218 16:16:44.651518 757 log.go:172] (0xc000adb760) Reply frame received for 5\nI0218 16:16:44.729988 757 log.go:172] (0xc000adb760) Data frame received for 5\nI0218 16:16:44.730037 757 log.go:172] (0xc000aa4500) (5) Data frame handling\nI0218 16:16:44.730053 757 log.go:172] (0xc000aa4500) (5) Data frame sent\nI0218 16:16:44.730058 757 log.go:172] (0xc000adb760) Data frame received for 5\nI0218 16:16:44.730066 757 log.go:172] (0xc000aa4500) (5) Data frame handling\n+ nslookup nodeport-service\nI0218 16:16:44.730099 757 log.go:172] (0xc000aa4500) (5) Data frame sent\nI0218 16:16:44.783790 757 log.go:172] (0xc000adb760) Data frame received for 3\nI0218 16:16:44.783824 757 log.go:172] (0xc0009a0000) (3) Data frame handling\nI0218 16:16:44.783846 757 log.go:172] (0xc0009a0000) (3) Data frame sent\nI0218 16:16:44.787392 757 log.go:172] (0xc000adb760) Data frame received for 3\nI0218 16:16:44.787457 757 log.go:172] (0xc0009a0000) (3) Data frame handling\nI0218 16:16:44.787500 757 log.go:172] (0xc0009a0000) (3) Data frame sent\nI0218 16:16:44.887168 757 log.go:172] (0xc000adb760) (0xc0009a0000) Stream removed, broadcasting: 3\nI0218 16:16:44.887490 757 log.go:172] (0xc000adb760) Data frame received for 1\nI0218 16:16:44.887559 757 log.go:172] (0xc0009b4280) (1) Data frame handling\nI0218 16:16:44.887597 757 log.go:172] (0xc0009b4280) (1) Data frame sent\nI0218 16:16:44.887757 757 log.go:172] (0xc000adb760) (0xc000aa4500) Stream removed, broadcasting: 5\nI0218 16:16:44.887967 757 log.go:172] (0xc000adb760) (0xc0009b4280) Stream removed, broadcasting: 1\nI0218 16:16:44.888066 757 log.go:172] (0xc000adb760) Go away received\nI0218 16:16:44.889187 757 log.go:172] (0xc000adb760) (0xc0009b4280) Stream removed, broadcasting: 1\nI0218 16:16:44.889222 757 log.go:172] (0xc000adb760) (0xc0009a0000) Stream removed, broadcasting: 3\nI0218 16:16:44.889242 757 log.go:172] (0xc000adb760) (0xc000aa4500) Stream removed, broadcasting: 5\n" Feb 18 16:16:44.898: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2595.svc.cluster.local\tcanonical name = externalsvc.services-2595.svc.cluster.local.\nName:\texternalsvc.services-2595.svc.cluster.local\nAddress: 10.96.246.40\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2595, will wait for the garbage collector to delete the pods Feb 18 16:16:44.963: INFO: Deleting ReplicationController externalsvc took: 8.79227ms Feb 18 16:16:45.264: INFO: Terminating ReplicationController externalsvc pods took: 300.419186ms Feb 18 16:17:02.587: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:17:02.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2595" for this suite. 
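After the type change, the Service name should resolve as a CNAME to the external name rather than to its own cluster IP, which is exactly what the nslookup output above shows. A sketch of verifying the same thing by hand, reusing names from this run (the services-2595 namespace has since been destroyed):

  # Confirm the Service type and target after the change:
  kubectl -n services-2595 get service nodeport-service \
    -o jsonpath='{.spec.type} {.spec.externalName}'
  # Resolve it from inside the cluster (execpodjsthf was the test's exec pod):
  kubectl -n services-2595 exec execpodjsthf -- nslookup nodeport-service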
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:39.999 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":280,"completed":63,"skipped":885,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:17:02.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:17:52.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9897" for this suite. 
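Each 'terminate-cmd-*' step above asserts on fields of the pod's status: the phase, the container's restart count, its Ready condition, and its state. A sketch of reading the same fields directly, with a hypothetical pod name (the test's generated pod names are not shown in the log):

  # Pod phase (Pending / Running / Succeeded / Failed):
  kubectl get pod terminate-cmd-rpa -o jsonpath='{.status.phase}'
  # Restart count and current container state:
  kubectl get pod terminate-cmd-rpa \
    -o jsonpath='{.status.containerStatuses[0].restartCount} {.status.containerStatuses[0].state}'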
• [SLOW TEST:49.873 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":280,"completed":64,"skipped":888,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:17:52.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:17:52.910: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-64eb6355-656b-4807-9445-fac6827b074f" in namespace "security-context-test-5074" to be "success or failure" Feb 18 16:17:52.920: INFO: Pod "alpine-nnp-false-64eb6355-656b-4807-9445-fac6827b074f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.585184ms Feb 18 16:17:54.926: INFO: Pod "alpine-nnp-false-64eb6355-656b-4807-9445-fac6827b074f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015616198s Feb 18 16:17:56.934: INFO: Pod "alpine-nnp-false-64eb6355-656b-4807-9445-fac6827b074f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022943534s Feb 18 16:17:58.939: INFO: Pod "alpine-nnp-false-64eb6355-656b-4807-9445-fac6827b074f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028609399s Feb 18 16:18:00.947: INFO: Pod "alpine-nnp-false-64eb6355-656b-4807-9445-fac6827b074f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.036343872s Feb 18 16:18:00.947: INFO: Pod "alpine-nnp-false-64eb6355-656b-4807-9445-fac6827b074f" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:18:00.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5074" for this suite. 
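The pod name above, alpine-nnp-false, suggests "no new privileges" enforced via allowPrivilegeEscalation: false. The pod spec itself is not printed in the log; a minimal sketch of the relevant securityContext wiring (pod name, image and command are assumptions):

# Assumed sketch: run as a non-root user with privilege escalation disabled,
# so setuid binaries cannot raise the effective UID; the test's container
# verifies it still runs as the unprivileged user.
apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-false
    image: alpine
    command: ["sh", "-c", "id -u"]
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: false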
• [SLOW TEST:8.360 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":65,"skipped":900,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:18:00.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Feb 18 16:18:01.181: INFO: Waiting up to 5m0s for pod "downward-api-b01c5044-1495-4362-9281-9b0cbd44110e" in namespace "downward-api-2805" to be "success or failure" Feb 18 16:18:01.245: INFO: Pod "downward-api-b01c5044-1495-4362-9281-9b0cbd44110e": Phase="Pending", Reason="", readiness=false. Elapsed: 63.599669ms Feb 18 16:18:03.254: INFO: Pod "downward-api-b01c5044-1495-4362-9281-9b0cbd44110e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072514172s Feb 18 16:18:05.263: INFO: Pod "downward-api-b01c5044-1495-4362-9281-9b0cbd44110e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081388298s Feb 18 16:18:07.273: INFO: Pod "downward-api-b01c5044-1495-4362-9281-9b0cbd44110e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091610651s Feb 18 16:18:09.280: INFO: Pod "downward-api-b01c5044-1495-4362-9281-9b0cbd44110e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098402838s STEP: Saw pod success Feb 18 16:18:09.280: INFO: Pod "downward-api-b01c5044-1495-4362-9281-9b0cbd44110e" satisfied condition "success or failure" Feb 18 16:18:09.286: INFO: Trying to get logs from node jerma-node pod downward-api-b01c5044-1495-4362-9281-9b0cbd44110e container dapi-container: STEP: delete the pod Feb 18 16:18:09.327: INFO: Waiting for pod downward-api-b01c5044-1495-4362-9281-9b0cbd44110e to disappear Feb 18 16:18:09.331: INFO: Pod downward-api-b01c5044-1495-4362-9281-9b0cbd44110e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:18:09.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2805" for this suite. 
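The dapi-container above receives the pod's own UID through the downward API. The spec is not in the log; the sketch below shows the usual env-var wiring for metadata.uid (pod name, image and the POD_UID variable name are illustrative assumptions):

# Assumed sketch of downward-API env injection: the kubelet fills POD_UID
# from the pod's metadata.uid at container start, and the test then reads
# the container log for the expected value.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid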
• [SLOW TEST:8.361 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":280,"completed":66,"skipped":906,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:18:09.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:18:09.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5498" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":280,"completed":67,"skipped":928,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:18:09.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's args Feb 18 16:18:09.880: INFO: Waiting up to 5m0s for pod "var-expansion-567a5213-cea7-4b2d-9305-ba66bbcdfcda" in namespace "var-expansion-3038" to be "success or failure" Feb 18 16:18:09.944: INFO: Pod "var-expansion-567a5213-cea7-4b2d-9305-ba66bbcdfcda": Phase="Pending", Reason="", readiness=false. Elapsed: 63.043147ms Feb 18 16:18:11.951: INFO: Pod "var-expansion-567a5213-cea7-4b2d-9305-ba66bbcdfcda": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.070274584s Feb 18 16:18:13.962: INFO: Pod "var-expansion-567a5213-cea7-4b2d-9305-ba66bbcdfcda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081057982s Feb 18 16:18:15.969: INFO: Pod "var-expansion-567a5213-cea7-4b2d-9305-ba66bbcdfcda": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088224935s Feb 18 16:18:17.975: INFO: Pod "var-expansion-567a5213-cea7-4b2d-9305-ba66bbcdfcda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.09423864s STEP: Saw pod success Feb 18 16:18:17.975: INFO: Pod "var-expansion-567a5213-cea7-4b2d-9305-ba66bbcdfcda" satisfied condition "success or failure" Feb 18 16:18:17.979: INFO: Trying to get logs from node jerma-node pod var-expansion-567a5213-cea7-4b2d-9305-ba66bbcdfcda container dapi-container: STEP: delete the pod Feb 18 16:18:18.011: INFO: Waiting for pod var-expansion-567a5213-cea7-4b2d-9305-ba66bbcdfcda to disappear Feb 18 16:18:18.042: INFO: Pod var-expansion-567a5213-cea7-4b2d-9305-ba66bbcdfcda no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:18:18.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3038" for this suite. • [SLOW TEST:8.502 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":280,"completed":68,"skipped":933,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:18:18.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test hostPath mode Feb 18 16:18:18.330: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7966" to be "success or failure" Feb 18 16:18:18.444: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 113.603702ms Feb 18 16:18:20.455: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124520038s Feb 18 16:18:22.465: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134494647s Feb 18 16:18:24.490: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.159533806s Feb 18 16:18:26.518: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187714881s Feb 18 16:18:28.536: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.20494819s Feb 18 16:18:30.549: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.21845534s STEP: Saw pod success Feb 18 16:18:30.549: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Feb 18 16:18:30.556: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: STEP: delete the pod Feb 18 16:18:30.623: INFO: Waiting for pod pod-host-path-test to disappear Feb 18 16:18:30.630: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:18:30.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7966" for this suite. • [SLOW TEST:12.593 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":69,"skipped":947,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:18:30.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 18 16:18:30.885: INFO: Waiting up to 5m0s for pod "downwardapi-volume-55d9a118-f1d3-457a-9ce8-355c79ae3e5d" in namespace "projected-5359" to be "success or failure" Feb 18 16:18:30.913: INFO: Pod "downwardapi-volume-55d9a118-f1d3-457a-9ce8-355c79ae3e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.961023ms Feb 18 16:18:32.921: INFO: Pod "downwardapi-volume-55d9a118-f1d3-457a-9ce8-355c79ae3e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035217654s Feb 18 16:18:34.929: INFO: Pod "downwardapi-volume-55d9a118-f1d3-457a-9ce8-355c79ae3e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043213997s Feb 18 16:18:36.937: INFO: Pod "downwardapi-volume-55d9a118-f1d3-457a-9ce8-355c79ae3e5d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.051402313s Feb 18 16:18:38.945: INFO: Pod "downwardapi-volume-55d9a118-f1d3-457a-9ce8-355c79ae3e5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05934816s STEP: Saw pod success Feb 18 16:18:38.945: INFO: Pod "downwardapi-volume-55d9a118-f1d3-457a-9ce8-355c79ae3e5d" satisfied condition "success or failure" Feb 18 16:18:38.949: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-55d9a118-f1d3-457a-9ce8-355c79ae3e5d container client-container: STEP: delete the pod Feb 18 16:18:39.180: INFO: Waiting for pod downwardapi-volume-55d9a118-f1d3-457a-9ce8-355c79ae3e5d to disappear Feb 18 16:18:39.210: INFO: Pod downwardapi-volume-55d9a118-f1d3-457a-9ce8-355c79ae3e5d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:18:39.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5359" for this suite. • [SLOW TEST:8.590 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":70,"skipped":972,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:18:39.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:18:52.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3760" for this suite. 
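The quota steps above (status calculated, pod usage captured, over-quota pods rejected, usage released on delete) all revolve around a single ResourceQuota object. Its exact contents are not in the log; a representative sketch (name and limits are assumptions):

# Assumed sketch: with this quota in place, a pod whose requests fit is
# admitted and charged against status.used, while a second pod or one
# exceeding the remaining cpu/memory requests is rejected at admission.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    pods: "1"
    requests.cpu: "500m"
    requests.memory: "500Mi"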
• [SLOW TEST:13.516 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":280,"completed":71,"skipped":990,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:18:52.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-f4b70671-53eb-438d-8e55-83d09b870b5a STEP: Creating secret with name s-test-opt-upd-6ab29d98-7e5b-4b95-88f3-1781d7d6d9ed STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f4b70671-53eb-438d-8e55-83d09b870b5a STEP: Updating secret s-test-opt-upd-6ab29d98-7e5b-4b95-88f3-1781d7d6d9ed STEP: Creating secret with name s-test-opt-create-5638c53e-5642-473e-b44b-6c3e021662bc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:20:22.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8101" for this suite. 
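The sequence above deletes one secret, updates another and creates a third while a pod has them mounted, then waits to observe the mounted files change. The key detail is that the secret volumes are marked optional, which is what lets the pod keep running when the s-test-opt-del secret disappears. A sketch of one such volume (pod name, image and mount path are assumptions; the secret name is from the log):

# Assumed sketch: optional: true means the volume is mounted even if the
# secret is absent, and the kubelet refreshes the projected files when the
# secret's data changes, which the test observes from inside the pod.
apiVersion: v1
kind: Pod
metadata:
  name: secret-optional-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-vol
    secret:
      secretName: s-test-opt-upd-6ab29d98-7e5b-4b95-88f3-1781d7d6d9ed
      optional: true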
• [SLOW TEST:89.730 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":72,"skipped":1001,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:20:22.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override all Feb 18 16:20:22.636: INFO: Waiting up to 5m0s for pod "client-containers-e4ba0f51-d5b5-4ad2-9365-a2b014a50fcc" in namespace "containers-3327" to be "success or failure" Feb 18 16:20:22.647: INFO: Pod "client-containers-e4ba0f51-d5b5-4ad2-9365-a2b014a50fcc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.293726ms Feb 18 16:20:24.654: INFO: Pod "client-containers-e4ba0f51-d5b5-4ad2-9365-a2b014a50fcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017265748s Feb 18 16:20:26.662: INFO: Pod "client-containers-e4ba0f51-d5b5-4ad2-9365-a2b014a50fcc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025767057s Feb 18 16:20:28.678: INFO: Pod "client-containers-e4ba0f51-d5b5-4ad2-9365-a2b014a50fcc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042049505s Feb 18 16:20:30.690: INFO: Pod "client-containers-e4ba0f51-d5b5-4ad2-9365-a2b014a50fcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053386888s STEP: Saw pod success Feb 18 16:20:30.690: INFO: Pod "client-containers-e4ba0f51-d5b5-4ad2-9365-a2b014a50fcc" satisfied condition "success or failure" Feb 18 16:20:30.696: INFO: Trying to get logs from node jerma-node pod client-containers-e4ba0f51-d5b5-4ad2-9365-a2b014a50fcc container test-container: STEP: delete the pod Feb 18 16:20:30.754: INFO: Waiting for pod client-containers-e4ba0f51-d5b5-4ad2-9365-a2b014a50fcc to disappear Feb 18 16:20:30.784: INFO: Pod client-containers-e4ba0f51-d5b5-4ad2-9365-a2b014a50fcc no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:20:30.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3327" for this suite. 
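"Creating a pod to test override all" above refers to overriding both halves of the image's startup line: command replaces the image's ENTRYPOINT and args replaces its CMD. A minimal sketch (pod name, image and the echoed values are assumptions):

# Assumed sketch: with both command and args set, the image's own
# ENTRYPOINT and CMD are ignored and the container runs
# `echo override arguments`, whose output the test reads from the logs.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo"]
    args: ["override", "arguments"]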
• [SLOW TEST:8.309 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":280,"completed":73,"skipped":1015,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:20:30.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-b340f703-caa9-437b-933b-05ab41836aa9 STEP: Creating a pod to test consume secrets Feb 18 16:20:30.974: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-122e2c88-c1e3-48fd-ba98-9ccd607c80e6" in namespace "projected-8474" to be "success or failure" Feb 18 16:20:31.078: INFO: Pod "pod-projected-secrets-122e2c88-c1e3-48fd-ba98-9ccd607c80e6": Phase="Pending", Reason="", readiness=false. Elapsed: 103.551738ms Feb 18 16:20:33.082: INFO: Pod "pod-projected-secrets-122e2c88-c1e3-48fd-ba98-9ccd607c80e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108383863s Feb 18 16:20:35.091: INFO: Pod "pod-projected-secrets-122e2c88-c1e3-48fd-ba98-9ccd607c80e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116921788s Feb 18 16:20:37.101: INFO: Pod "pod-projected-secrets-122e2c88-c1e3-48fd-ba98-9ccd607c80e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126765349s Feb 18 16:20:39.106: INFO: Pod "pod-projected-secrets-122e2c88-c1e3-48fd-ba98-9ccd607c80e6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131884697s Feb 18 16:20:41.115: INFO: Pod "pod-projected-secrets-122e2c88-c1e3-48fd-ba98-9ccd607c80e6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.141272367s STEP: Saw pod success Feb 18 16:20:41.115: INFO: Pod "pod-projected-secrets-122e2c88-c1e3-48fd-ba98-9ccd607c80e6" satisfied condition "success or failure" Feb 18 16:20:41.119: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-122e2c88-c1e3-48fd-ba98-9ccd607c80e6 container projected-secret-volume-test: STEP: delete the pod Feb 18 16:20:41.151: INFO: Waiting for pod pod-projected-secrets-122e2c88-c1e3-48fd-ba98-9ccd607c80e6 to disappear Feb 18 16:20:41.178: INFO: Pod pod-projected-secrets-122e2c88-c1e3-48fd-ba98-9ccd607c80e6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:20:41.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8474" for this suite. • [SLOW TEST:10.443 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":74,"skipped":1016,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:20:41.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 18 16:20:48.450: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:20:48.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3213" for this suite. 
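The assertion above, Expected: &{DONE} to match Container's Termination Message: DONE, hinges on terminationMessagePolicy: FallbackToLogsOnError: when a container fails without writing /dev/termination-log, the tail of its log is reported as the termination message instead. A sketch of such a container (pod name and image are assumptions; the DONE output matches the log):

# Assumed sketch: the container writes DONE only to stdout and exits
# non-zero, so with FallbackToLogsOnError the kubelet reports DONE as the
# termination message even though /dev/termination-log was never written.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: term-demo
    image: busybox
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError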
• [SLOW TEST:7.327 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":75,"skipped":1018,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:20:48.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1678 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1678 STEP: creating replication controller externalsvc in namespace services-1678 I0218 16:20:48.769124 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-1678, replica count: 2 I0218 16:20:51.820879 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:20:54.822055 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:20:57.823202 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:21:00.824464 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Feb 18 16:21:00.980: INFO: Creating new exec pod Feb 18 16:21:07.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1678 execpodmzh5v -- /bin/sh -x -c nslookup clusterip-service' Feb 18 16:21:07.504: INFO: stderr: "I0218 16:21:07.251937 777 log.go:172] (0xc0003e7290) (0xc00091a640) Create 
stream\nI0218 16:21:07.252055 777 log.go:172] (0xc0003e7290) (0xc00091a640) Stream added, broadcasting: 1\nI0218 16:21:07.278248 777 log.go:172] (0xc0003e7290) Reply frame received for 1\nI0218 16:21:07.278339 777 log.go:172] (0xc0003e7290) (0xc000699cc0) Create stream\nI0218 16:21:07.278370 777 log.go:172] (0xc0003e7290) (0xc000699cc0) Stream added, broadcasting: 3\nI0218 16:21:07.280907 777 log.go:172] (0xc0003e7290) Reply frame received for 3\nI0218 16:21:07.281000 777 log.go:172] (0xc0003e7290) (0xc0006708c0) Create stream\nI0218 16:21:07.281018 777 log.go:172] (0xc0003e7290) (0xc0006708c0) Stream added, broadcasting: 5\nI0218 16:21:07.283412 777 log.go:172] (0xc0003e7290) Reply frame received for 5\nI0218 16:21:07.367398 777 log.go:172] (0xc0003e7290) Data frame received for 5\nI0218 16:21:07.367465 777 log.go:172] (0xc0006708c0) (5) Data frame handling\nI0218 16:21:07.367548 777 log.go:172] (0xc0006708c0) (5) Data frame sent\n+ nslookup clusterip-service\nI0218 16:21:07.398683 777 log.go:172] (0xc0003e7290) Data frame received for 3\nI0218 16:21:07.398733 777 log.go:172] (0xc000699cc0) (3) Data frame handling\nI0218 16:21:07.398762 777 log.go:172] (0xc000699cc0) (3) Data frame sent\nI0218 16:21:07.399581 777 log.go:172] (0xc0003e7290) Data frame received for 3\nI0218 16:21:07.399605 777 log.go:172] (0xc000699cc0) (3) Data frame handling\nI0218 16:21:07.399623 777 log.go:172] (0xc000699cc0) (3) Data frame sent\nI0218 16:21:07.484004 777 log.go:172] (0xc0003e7290) Data frame received for 1\nI0218 16:21:07.484549 777 log.go:172] (0xc0003e7290) (0xc000699cc0) Stream removed, broadcasting: 3\nI0218 16:21:07.484765 777 log.go:172] (0xc0003e7290) (0xc0006708c0) Stream removed, broadcasting: 5\nI0218 16:21:07.484836 777 log.go:172] (0xc00091a640) (1) Data frame handling\nI0218 16:21:07.484873 777 log.go:172] (0xc00091a640) (1) Data frame sent\nI0218 16:21:07.484902 777 log.go:172] (0xc0003e7290) (0xc00091a640) Stream removed, broadcasting: 1\nI0218 16:21:07.484938 777 log.go:172] (0xc0003e7290) Go away received\nI0218 16:21:07.486780 777 log.go:172] (0xc0003e7290) (0xc00091a640) Stream removed, broadcasting: 1\nI0218 16:21:07.486814 777 log.go:172] (0xc0003e7290) (0xc000699cc0) Stream removed, broadcasting: 3\nI0218 16:21:07.486833 777 log.go:172] (0xc0003e7290) (0xc0006708c0) Stream removed, broadcasting: 5\n" Feb 18 16:21:07.505: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1678.svc.cluster.local\tcanonical name = externalsvc.services-1678.svc.cluster.local.\nName:\texternalsvc.services-1678.svc.cluster.local\nAddress: 10.96.202.251\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1678, will wait for the garbage collector to delete the pods Feb 18 16:21:07.570: INFO: Deleting ReplicationController externalsvc took: 7.90655ms Feb 18 16:21:07.871: INFO: Terminating ReplicationController externalsvc pods took: 301.34697ms Feb 18 16:21:23.223: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:21:23.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1678" for this suite. 
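This is the mirror image of the NodePort case earlier in the log: clusterip-service starts with an allocated cluster IP, so converting it to ExternalName also requires clearing spec.clusterIP (an ExternalName Service has none). A sketch of the converted object, reusing the names from the log (the manifest itself is assumed):

# Assumed sketch of the Service after conversion; no clusterIP field
# remains, only the CNAME target verified by the nslookup output above.
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-1678
spec:
  type: ExternalName
  externalName: externalsvc.services-1678.svc.cluster.local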
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:34.749 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":280,"completed":76,"skipped":1030,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:21:23.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-006bac98-0ee5-45e9-a8fd-dd1b4fff0e45 STEP: Creating a pod to test consume configMaps Feb 18 16:21:23.389: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-67f25034-808d-407e-85f9-c730e10438c1" in namespace "projected-6987" to be "success or failure" Feb 18 16:21:23.474: INFO: Pod "pod-projected-configmaps-67f25034-808d-407e-85f9-c730e10438c1": Phase="Pending", Reason="", readiness=false. Elapsed: 84.976286ms Feb 18 16:21:25.482: INFO: Pod "pod-projected-configmaps-67f25034-808d-407e-85f9-c730e10438c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092825092s Feb 18 16:21:27.493: INFO: Pod "pod-projected-configmaps-67f25034-808d-407e-85f9-c730e10438c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104093855s Feb 18 16:21:29.501: INFO: Pod "pod-projected-configmaps-67f25034-808d-407e-85f9-c730e10438c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112019237s Feb 18 16:21:31.510: INFO: Pod "pod-projected-configmaps-67f25034-808d-407e-85f9-c730e10438c1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120958303s Feb 18 16:21:33.523: INFO: Pod "pod-projected-configmaps-67f25034-808d-407e-85f9-c730e10438c1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.134409717s STEP: Saw pod success Feb 18 16:21:33.524: INFO: Pod "pod-projected-configmaps-67f25034-808d-407e-85f9-c730e10438c1" satisfied condition "success or failure" Feb 18 16:21:33.532: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-67f25034-808d-407e-85f9-c730e10438c1 container projected-configmap-volume-test: STEP: delete the pod Feb 18 16:21:33.625: INFO: Waiting for pod pod-projected-configmaps-67f25034-808d-407e-85f9-c730e10438c1 to disappear Feb 18 16:21:33.635: INFO: Pod pod-projected-configmaps-67f25034-808d-407e-85f9-c730e10438c1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:21:33.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6987" for this suite. • [SLOW TEST:10.342 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":77,"skipped":1052,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:21:33.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:21:33.774: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Feb 18 16:21:36.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9115 create -f -' Feb 18 16:21:39.390: INFO: stderr: "" Feb 18 16:21:39.391: INFO: stdout: "e2e-test-crd-publish-openapi-2018-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Feb 18 16:21:39.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9115 delete e2e-test-crd-publish-openapi-2018-crds test-foo' Feb 18 16:21:39.603: INFO: stderr: "" Feb 18 16:21:39.603: INFO: stdout: "e2e-test-crd-publish-openapi-2018-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Feb 18 16:21:39.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9115 apply -f -' Feb 18 16:21:40.057: INFO: stderr: "" Feb 18 16:21:40.057: INFO: stdout: 
"e2e-test-crd-publish-openapi-2018-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Feb 18 16:21:40.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9115 delete e2e-test-crd-publish-openapi-2018-crds test-foo' Feb 18 16:21:40.204: INFO: stderr: "" Feb 18 16:21:40.204: INFO: stdout: "e2e-test-crd-publish-openapi-2018-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Feb 18 16:21:40.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9115 create -f -' Feb 18 16:21:40.618: INFO: rc: 1 Feb 18 16:21:40.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9115 apply -f -' Feb 18 16:21:41.007: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Feb 18 16:21:41.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9115 create -f -' Feb 18 16:21:41.319: INFO: rc: 1 Feb 18 16:21:41.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9115 apply -f -' Feb 18 16:21:41.642: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Feb 18 16:21:41.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2018-crds' Feb 18 16:21:42.053: INFO: stderr: "" Feb 18 16:21:42.053: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2018-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Feb 18 16:21:42.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2018-crds.metadata' Feb 18 16:21:42.427: INFO: stderr: "" Feb 18 16:21:42.427: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2018-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. 
They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. 
Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Feb 18 16:21:42.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2018-crds.spec' Feb 18 16:21:42.775: INFO: stderr: "" Feb 18 16:21:42.775: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2018-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Feb 18 16:21:42.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2018-crds.spec.bars' Feb 18 16:21:43.112: INFO: stderr: "" Feb 18 16:21:43.112: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2018-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Feb 18 16:21:43.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2018-crds.spec.bars2' Feb 18 16:21:43.352: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:21:45.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9115" for this suite. 
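The kubectl explain output above is rendered from the OpenAPI v3 schema the API server publishes for the CRD. The test builds the CRD in Go; the skeleton below is an assumed reconstruction showing where that schema lives (group, kind and field names are taken from the log; schema details such as the age type are inferred from the explain output):

# Assumed reconstruction, not the test's literal object: the
# openAPIV3Schema on the served version is what backs both the client-side
# validation (the create/apply rejections above) and kubectl explain.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-2018-crds.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-2018-crds
    singular: e2e-test-crd-publish-openapi-2018-crd
    kind: E2e-test-crd-publish-openapi-2018-crd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: Foo CRD for Testing
        type: object
        properties:
          spec:
            description: Specification of Foo
            type: object
            properties:
              bars:
                description: List of Bars and their specs.
                type: array
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      description: Name of Bar.
                      type: string
                    age:
                      description: Age of Bar.
                      type: string
                    bazs:
                      description: List of Bazs.
                      type: array
                      items:
                        type: string
          status:
            description: Status of Foo
            type: object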
• [SLOW TEST:11.755 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":280,"completed":78,"skipped":1058,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:21:45.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:21:45.474: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:21:47.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9445" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":280,"completed":79,"skipped":1059,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:21:47.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 18 16:21:47.123: INFO: Waiting up to 5m0s for pod "pod-f7b123d1-97bb-4536-9574-4b133d42da31" in namespace "emptydir-3068" to be "success or failure" Feb 18 16:21:47.166: INFO: Pod "pod-f7b123d1-97bb-4536-9574-4b133d42da31": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.64254ms Feb 18 16:21:49.174: INFO: Pod "pod-f7b123d1-97bb-4536-9574-4b133d42da31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05068379s Feb 18 16:21:51.211: INFO: Pod "pod-f7b123d1-97bb-4536-9574-4b133d42da31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088044019s Feb 18 16:21:53.219: INFO: Pod "pod-f7b123d1-97bb-4536-9574-4b133d42da31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09597009s Feb 18 16:21:56.347: INFO: Pod "pod-f7b123d1-97bb-4536-9574-4b133d42da31": Phase="Pending", Reason="", readiness=false. Elapsed: 9.223740784s Feb 18 16:21:58.353: INFO: Pod "pod-f7b123d1-97bb-4536-9574-4b133d42da31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.229749355s STEP: Saw pod success Feb 18 16:21:58.353: INFO: Pod "pod-f7b123d1-97bb-4536-9574-4b133d42da31" satisfied condition "success or failure" Feb 18 16:21:58.356: INFO: Trying to get logs from node jerma-node pod pod-f7b123d1-97bb-4536-9574-4b133d42da31 container test-container: STEP: delete the pod Feb 18 16:21:58.407: INFO: Waiting for pod pod-f7b123d1-97bb-4536-9574-4b133d42da31 to disappear Feb 18 16:21:58.454: INFO: Pod pod-f7b123d1-97bb-4536-9574-4b133d42da31 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:21:58.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3068" for this suite. • [SLOW TEST:11.463 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":80,"skipped":1063,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:21:58.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:21:58.777: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e5aef735-bd8e-47a1-99a5-67343d57293d", Controller:(*bool)(0xc002adebc2), BlockOwnerDeletion:(*bool)(0xc002adebc3)}} Feb 18 16:21:58.785: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"16ab042d-0d2d-4e88-bca3-94fa90a816c4", Controller:(*bool)(0xc002adedda), BlockOwnerDeletion:(*bool)(0xc002adeddb)}} Feb 18 16:21:58.830: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", 
Kind:"Pod", Name:"pod2", UID:"57f96db1-ac3b-4398-ac4e-df73441487f8", Controller:(*bool)(0xc002a424ea), BlockOwnerDeletion:(*bool)(0xc002a424eb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:22:03.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4247" for this suite. • [SLOW TEST:5.394 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":280,"completed":81,"skipped":1065,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:22:03.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Feb 18 16:22:04.076: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:22:20.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9138" for this suite. • [SLOW TEST:17.075 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":280,"completed":82,"skipped":1065,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:22:20.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:22:37.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6006" for this suite. • [SLOW TEST:16.277 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":280,"completed":83,"skipped":1078,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:22:37.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a Namespace STEP: patching the Namespace STEP: getting the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:22:37.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3154" for this suite. STEP: Destroying namespace "nspatchtest-6dc19b9b-4842-4ce6-9cb4-1e253ba3a503-5075" for this suite.
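The patch step above boils down to merging a label into the namespace's metadata; a minimal sketch, assuming a namespace named nspatchtest exists (the label key and value here are illustrative, not necessarily what the test sends):

  kubectl patch namespace nspatchtest -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
  kubectl get namespace nspatchtest --show-labels   # the merged label should be listed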
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":280,"completed":84,"skipped":1079,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:22:37.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Feb 18 16:22:37.758: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:22:57.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5054" for this suite. • [SLOW TEST:19.396 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":280,"completed":85,"skipped":1086,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:22:57.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:22:57.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7536" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":280,"completed":86,"skipped":1090,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:22:57.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 18 16:23:11.403: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 18 16:23:11.411: INFO: Pod pod-with-poststart-exec-hook still exists Feb 18 16:23:13.411: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 18 16:23:13.421: INFO: Pod pod-with-poststart-exec-hook still exists Feb 18 16:23:15.411: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 18 16:23:15.424: INFO: Pod pod-with-poststart-exec-hook still exists Feb 18 16:23:17.411: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 18 16:23:17.421: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:23:17.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7349" for this suite. 
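The pod under test attaches a postStart exec hook to an ordinary container; a minimal sketch of an equivalent manifest, assuming a stock busybox image (the e2e suite uses its own test images and handler commands, so names here are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-exec-hook
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 600"]
      lifecycle:
        postStart:
          exec:
            command: ["sh", "-c", "echo ran > /tmp/poststart"]
  EOF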
• [SLOW TEST:20.255 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":280,"completed":87,"skipped":1096,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:23:17.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-downwardapi-t4bp STEP: Creating a pod to test atomic-volume-subpath Feb 18 16:23:17.614: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-t4bp" in namespace "subpath-7400" to be "success or failure" Feb 18 16:23:17.629: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Pending", Reason="", readiness=false. Elapsed: 15.622449ms Feb 18 16:23:19.709: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095314143s Feb 18 16:23:22.480: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.866420949s Feb 18 16:23:24.788: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Pending", Reason="", readiness=false. Elapsed: 7.174818927s Feb 18 16:23:26.794: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Pending", Reason="", readiness=false. Elapsed: 9.180567636s Feb 18 16:23:28.803: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Running", Reason="", readiness=true. Elapsed: 11.189225682s Feb 18 16:23:30.812: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Running", Reason="", readiness=true. Elapsed: 13.198407694s Feb 18 16:23:32.820: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Running", Reason="", readiness=true. Elapsed: 15.205875083s Feb 18 16:23:34.828: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Running", Reason="", readiness=true. Elapsed: 17.214640983s Feb 18 16:23:36.836: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Running", Reason="", readiness=true. Elapsed: 19.222212645s Feb 18 16:23:38.842: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Running", Reason="", readiness=true. 
Elapsed: 21.228779545s Feb 18 16:23:40.857: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Running", Reason="", readiness=true. Elapsed: 23.242991348s Feb 18 16:23:42.871: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Running", Reason="", readiness=true. Elapsed: 25.257435456s Feb 18 16:23:44.882: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Running", Reason="", readiness=true. Elapsed: 27.268785982s Feb 18 16:23:46.890: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Running", Reason="", readiness=true. Elapsed: 29.276737378s Feb 18 16:23:48.895: INFO: Pod "pod-subpath-test-downwardapi-t4bp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.281313471s STEP: Saw pod success Feb 18 16:23:48.895: INFO: Pod "pod-subpath-test-downwardapi-t4bp" satisfied condition "success or failure" Feb 18 16:23:48.901: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-t4bp container test-container-subpath-downwardapi-t4bp: STEP: delete the pod Feb 18 16:23:48.940: INFO: Waiting for pod pod-subpath-test-downwardapi-t4bp to disappear Feb 18 16:23:48.972: INFO: Pod pod-subpath-test-downwardapi-t4bp no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-t4bp Feb 18 16:23:48.973: INFO: Deleting pod "pod-subpath-test-downwardapi-t4bp" in namespace "subpath-7400" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:23:48.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7400" for this suite. • [SLOW TEST:31.516 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":280,"completed":88,"skipped":1138,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:23:48.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-4cae48bd-502a-4ad8-8474-5c3af37dd20d STEP: Creating a pod to test consume configMaps Feb 18 16:23:49.227: INFO: Waiting up to 5m0s for pod "pod-configmaps-460e8477-f29e-455a-8574-7bb4c3d9b0be" in namespace "configmap-1881" to be "success or failure" Feb 18 16:23:49.239: INFO: Pod 
"pod-configmaps-460e8477-f29e-455a-8574-7bb4c3d9b0be": Phase="Pending", Reason="", readiness=false. Elapsed: 11.880718ms Feb 18 16:23:51.250: INFO: Pod "pod-configmaps-460e8477-f29e-455a-8574-7bb4c3d9b0be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022499445s Feb 18 16:23:53.256: INFO: Pod "pod-configmaps-460e8477-f29e-455a-8574-7bb4c3d9b0be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029214282s Feb 18 16:23:55.264: INFO: Pod "pod-configmaps-460e8477-f29e-455a-8574-7bb4c3d9b0be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037018552s Feb 18 16:23:57.272: INFO: Pod "pod-configmaps-460e8477-f29e-455a-8574-7bb4c3d9b0be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044530176s Feb 18 16:23:59.279: INFO: Pod "pod-configmaps-460e8477-f29e-455a-8574-7bb4c3d9b0be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.052375651s STEP: Saw pod success Feb 18 16:23:59.280: INFO: Pod "pod-configmaps-460e8477-f29e-455a-8574-7bb4c3d9b0be" satisfied condition "success or failure" Feb 18 16:23:59.285: INFO: Trying to get logs from node jerma-node pod pod-configmaps-460e8477-f29e-455a-8574-7bb4c3d9b0be container configmap-volume-test: STEP: delete the pod Feb 18 16:23:59.381: INFO: Waiting for pod pod-configmaps-460e8477-f29e-455a-8574-7bb4c3d9b0be to disappear Feb 18 16:23:59.387: INFO: Pod pod-configmaps-460e8477-f29e-455a-8574-7bb4c3d9b0be no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:23:59.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1881" for this suite. • [SLOW TEST:10.413 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":89,"skipped":1140,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:23:59.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466 STEP: creating an pod Feb 18 16:23:59.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-3750 -- logs-generator --log-lines-total 100 --run-duration 20s' Feb 18 16:23:59.646: INFO: stderr: "" Feb 
18 16:23:59.646: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Waiting for log generator to start. Feb 18 16:23:59.647: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Feb 18 16:23:59.647: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3750" to be "running and ready, or succeeded" Feb 18 16:23:59.676: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 29.130214ms Feb 18 16:24:01.683: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036193405s Feb 18 16:24:03.709: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061750848s Feb 18 16:24:05.715: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068305261s Feb 18 16:24:07.723: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.076588151s Feb 18 16:24:07.724: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Feb 18 16:24:07.724: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Feb 18 16:24:07.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3750' Feb 18 16:24:07.946: INFO: stderr: "" Feb 18 16:24:07.947: INFO: stdout: "I0218 16:24:06.320328 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/tjn 562\nI0218 16:24:06.520879 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/htz 293\nI0218 16:24:06.721287 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/nf2 423\nI0218 16:24:06.921987 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/wxpr 340\nI0218 16:24:07.121797 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/xwgh 553\nI0218 16:24:07.320746 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/5vp 477\nI0218 16:24:07.520841 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/f68 498\nI0218 16:24:07.721026 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/7r2 472\nI0218 16:24:07.920823 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/tcgr 283\n" STEP: limiting log lines Feb 18 16:24:07.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3750 --tail=1' Feb 18 16:24:08.096: INFO: stderr: "" Feb 18 16:24:08.096: INFO: stdout: "I0218 16:24:07.920823 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/tcgr 283\n" Feb 18 16:24:08.097: INFO: got output "I0218 16:24:07.920823 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/tcgr 283\n" STEP: limiting log bytes Feb 18 16:24:08.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3750 --limit-bytes=1' Feb 18 16:24:08.211: INFO: stderr: "" Feb 18 16:24:08.211: INFO: stdout: "I" Feb 18 16:24:08.211: INFO: got output "I" STEP: exposing timestamps Feb 18 16:24:08.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3750 --tail=1 --timestamps' Feb 18 16:24:08.326: INFO: stderr: "" Feb 18 16:24:08.326: INFO: stdout: "2020-02-18T16:24:08.121748934Z I0218 16:24:08.120925 1
logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/vms 250\n" Feb 18 16:24:08.326: INFO: got output "2020-02-18T16:24:08.121748934Z I0218 16:24:08.120925 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/vms 250\n" STEP: restricting to a time range Feb 18 16:24:10.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3750 --since=1s' Feb 18 16:24:11.047: INFO: stderr: "" Feb 18 16:24:11.047: INFO: stdout: "I0218 16:24:10.120709 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/c4cd 221\nI0218 16:24:10.320776 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/5xg 292\nI0218 16:24:10.520939 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/xv5 579\nI0218 16:24:10.720836 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/zmc7 518\nI0218 16:24:10.921043 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/zwt 370\n" Feb 18 16:24:11.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3750 --since=24h' Feb 18 16:24:11.158: INFO: stderr: "" Feb 18 16:24:11.158: INFO: stdout: "I0218 16:24:06.320328 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/tjn 562\nI0218 16:24:06.520879 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/htz 293\nI0218 16:24:06.721287 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/nf2 423\nI0218 16:24:06.921987 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/wxpr 340\nI0218 16:24:07.121797 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/xwgh 553\nI0218 16:24:07.320746 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/5vp 477\nI0218 16:24:07.520841 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/f68 498\nI0218 16:24:07.721026 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/7r2 472\nI0218 16:24:07.920823 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/tcgr 283\nI0218 16:24:08.120925 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/vms 250\nI0218 16:24:08.320673 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/mpwp 342\nI0218 16:24:08.520848 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/czt6 490\nI0218 16:24:08.720917 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/vxtk 254\nI0218 16:24:08.920716 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/klsv 261\nI0218 16:24:09.120777 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/flf 214\nI0218 16:24:09.320857 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/w58r 387\nI0218 16:24:09.520865 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/l9n 577\nI0218 16:24:09.720890 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/sxd 563\nI0218 16:24:09.920860 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/tvv 422\nI0218 16:24:10.120709 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/c4cd 221\nI0218 16:24:10.320776 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/5xg 292\nI0218 16:24:10.520939 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/xv5 579\nI0218 16:24:10.720836 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/zmc7 518\nI0218 16:24:10.921043 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/zwt 370\nI0218 16:24:11.120939 1 logs_generator.go:76] 24 POST 
/api/v1/namespaces/kube-system/pods/5r78 422\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1472 Feb 18 16:24:11.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3750' Feb 18 16:24:22.402: INFO: stderr: "" Feb 18 16:24:22.402: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:24:22.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3750" for this suite. • [SLOW TEST:23.083 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":280,"completed":90,"skipped":1151,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:24:22.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-5773, will wait for the garbage collector to delete the pods Feb 18 16:24:34.970: INFO: Deleting Job.batch foo took: 9.853524ms Feb 18 16:24:35.271: INFO: Terminating Job.batch foo pods took: 300.530279ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:25:22.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5773" for this suite. 
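The delete step above leans on the garbage collector to remove the Job's pods once the Job itself is gone; a minimal sketch, assuming a stock busybox image (the job name and command are illustrative):

  kubectl create job foo --image=busybox -- sh -c 'sleep 300'
  kubectl delete job foo              # dependent pods are collected afterwards, as waited for above
  kubectl get pods -l job-name=foo    # should eventually report no resources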
• [SLOW TEST:59.921 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":280,"completed":91,"skipped":1166,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:25:22.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2314 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2314;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2314 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2314;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2314.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2314.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2314.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2314.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2314.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2314.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2314.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2314.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2314.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2314.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2314.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2314.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2314.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 234.100.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.100.234_udp@PTR;check="$$(dig +tcp +noall +answer +search 234.100.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.100.234_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2314 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2314;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2314 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2314;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2314.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2314.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2314.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2314.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2314.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2314.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2314.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2314.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2314.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2314.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2314.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2314.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2314.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 234.100.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.100.234_udp@PTR;check="$$(dig +tcp +noall +answer +search 234.100.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.100.234_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 18 16:25:32.808: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.813: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.817: INFO: Unable to read wheezy_udp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.823: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.827: INFO: Unable to read wheezy_udp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.830: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.833: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.836: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.882: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.886: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.891: INFO: Unable to read jessie_udp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.894: INFO: Unable to read jessie_tcp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.899: INFO: Unable to read jessie_udp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.903: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.910: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.913: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:32.941: INFO: Lookups using dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2314 wheezy_tcp@dns-test-service.dns-2314 wheezy_udp@dns-test-service.dns-2314.svc wheezy_tcp@dns-test-service.dns-2314.svc wheezy_udp@_http._tcp.dns-test-service.dns-2314.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2314.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2314 jessie_tcp@dns-test-service.dns-2314 jessie_udp@dns-test-service.dns-2314.svc jessie_tcp@dns-test-service.dns-2314.svc jessie_udp@_http._tcp.dns-test-service.dns-2314.svc jessie_tcp@_http._tcp.dns-test-service.dns-2314.svc] Feb 18 16:25:37.950: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:37.954: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:37.959: INFO: Unable to read wheezy_udp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:37.964: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:37.968: INFO: Unable to read wheezy_udp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:37.971: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:37.975: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:37.979: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:38.023: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:38.026: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:38.029: INFO: Unable to read jessie_udp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:38.038: INFO: Unable to read jessie_tcp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:38.044: INFO: Unable to read jessie_udp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:38.047: INFO: Unable to read jessie_tcp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:38.049: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:38.053: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:38.071: INFO: Lookups using dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2314 wheezy_tcp@dns-test-service.dns-2314 wheezy_udp@dns-test-service.dns-2314.svc wheezy_tcp@dns-test-service.dns-2314.svc wheezy_udp@_http._tcp.dns-test-service.dns-2314.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2314.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2314 jessie_tcp@dns-test-service.dns-2314 jessie_udp@dns-test-service.dns-2314.svc jessie_tcp@dns-test-service.dns-2314.svc jessie_udp@_http._tcp.dns-test-service.dns-2314.svc jessie_tcp@_http._tcp.dns-test-service.dns-2314.svc] Feb 18 16:25:42.950: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:42.954: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:42.957: INFO: Unable to read wheezy_udp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:42.961: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2314 from pod 
dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:42.965: INFO: Unable to read wheezy_udp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:42.968: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:42.973: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:42.975: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:43.019: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:43.023: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:43.027: INFO: Unable to read jessie_udp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:43.030: INFO: Unable to read jessie_tcp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:43.034: INFO: Unable to read jessie_udp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:43.037: INFO: Unable to read jessie_tcp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:43.039: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:43.042: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:43.078: INFO: Lookups using dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2314 wheezy_tcp@dns-test-service.dns-2314 wheezy_udp@dns-test-service.dns-2314.svc wheezy_tcp@dns-test-service.dns-2314.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-2314.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2314.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2314 jessie_tcp@dns-test-service.dns-2314 jessie_udp@dns-test-service.dns-2314.svc jessie_tcp@dns-test-service.dns-2314.svc jessie_udp@_http._tcp.dns-test-service.dns-2314.svc jessie_tcp@_http._tcp.dns-test-service.dns-2314.svc] Feb 18 16:25:47.956: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:47.965: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:47.974: INFO: Unable to read wheezy_udp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:47.982: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:47.990: INFO: Unable to read wheezy_udp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:47.997: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:48.002: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:48.006: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:48.049: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:48.055: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:48.059: INFO: Unable to read jessie_udp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:48.063: INFO: Unable to read jessie_tcp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:48.066: INFO: Unable to read jessie_udp@dns-test-service.dns-2314.svc from pod 
dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:48.069: INFO: Unable to read jessie_tcp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:48.073: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:48.076: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:48.093: INFO: Lookups using dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2314 wheezy_tcp@dns-test-service.dns-2314 wheezy_udp@dns-test-service.dns-2314.svc wheezy_tcp@dns-test-service.dns-2314.svc wheezy_udp@_http._tcp.dns-test-service.dns-2314.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2314.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2314 jessie_tcp@dns-test-service.dns-2314 jessie_udp@dns-test-service.dns-2314.svc jessie_tcp@dns-test-service.dns-2314.svc jessie_udp@_http._tcp.dns-test-service.dns-2314.svc jessie_tcp@_http._tcp.dns-test-service.dns-2314.svc] Feb 18 16:25:52.956: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:52.975: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:52.983: INFO: Unable to read wheezy_udp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:52.989: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:52.995: INFO: Unable to read wheezy_udp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:52.999: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:53.004: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:53.008: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2314.svc from pod 
dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:53.058: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:53.064: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:53.069: INFO: Unable to read jessie_udp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:53.074: INFO: Unable to read jessie_tcp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:53.079: INFO: Unable to read jessie_udp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:53.085: INFO: Unable to read jessie_tcp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:53.089: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:53.094: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:53.134: INFO: Lookups using dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2314 wheezy_tcp@dns-test-service.dns-2314 wheezy_udp@dns-test-service.dns-2314.svc wheezy_tcp@dns-test-service.dns-2314.svc wheezy_udp@_http._tcp.dns-test-service.dns-2314.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2314.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2314 jessie_tcp@dns-test-service.dns-2314 jessie_udp@dns-test-service.dns-2314.svc jessie_tcp@dns-test-service.dns-2314.svc jessie_udp@_http._tcp.dns-test-service.dns-2314.svc jessie_tcp@_http._tcp.dns-test-service.dns-2314.svc] Feb 18 16:25:57.950: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:57.954: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:57.987: INFO: Unable to read wheezy_udp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the 
server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:57.992: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:57.997: INFO: Unable to read wheezy_udp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:58.002: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:58.007: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:58.011: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:58.059: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:58.062: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:58.066: INFO: Unable to read jessie_udp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:58.070: INFO: Unable to read jessie_tcp@dns-test-service.dns-2314 from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:58.075: INFO: Unable to read jessie_udp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:58.079: INFO: Unable to read jessie_tcp@dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:58.085: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:58.089: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2314.svc from pod dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5: the server could not find the requested resource (get pods dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5) Feb 18 16:25:58.112: INFO: Lookups using dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2314 wheezy_tcp@dns-test-service.dns-2314 wheezy_udp@dns-test-service.dns-2314.svc wheezy_tcp@dns-test-service.dns-2314.svc wheezy_udp@_http._tcp.dns-test-service.dns-2314.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2314.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2314 jessie_tcp@dns-test-service.dns-2314 jessie_udp@dns-test-service.dns-2314.svc jessie_tcp@dns-test-service.dns-2314.svc jessie_udp@_http._tcp.dns-test-service.dns-2314.svc jessie_tcp@_http._tcp.dns-test-service.dns-2314.svc] Feb 18 16:26:03.647: INFO: DNS probes using dns-2314/dns-test-9c325607-19d5-4851-9ce5-a987cfd8c0b5 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:26:04.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2314" for this suite. • [SLOW TEST:42.062 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":280,"completed":92,"skipped":1186,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:26:04.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-bkn7k in namespace proxy-70 I0218 16:26:04.810325 9 runners.go:189] Created replication controller with name: proxy-service-bkn7k, namespace: proxy-70, replica count: 1 I0218 16:26:05.861521 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:26:06.862145 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:26:07.862787 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:26:08.863552 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:26:09.864033 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 
running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:26:10.864499 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:26:11.865059 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:26:12.866157 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:26:13.867247 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0218 16:26:14.867807 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0218 16:26:15.868363 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0218 16:26:16.869224 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0218 16:26:17.870153 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0218 16:26:18.870973 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0218 16:26:19.871451 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0218 16:26:20.872301 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0218 16:26:21.872979 9 runners.go:189] proxy-service-bkn7k Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 18 16:26:21.880: INFO: setup took 17.193958219s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 18 16:26:21.904: INFO: (0) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:1080/proxy/: te... 
(200; 22.95604ms) Feb 18 16:26:21.904: INFO: (0) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 23.086915ms) Feb 18 16:26:21.904: INFO: (0) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:1080/proxy/: testtest (200; 33.921631ms) Feb 18 16:26:21.915: INFO: (0) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 35.18668ms) Feb 18 16:26:21.915: INFO: (0) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 34.271822ms) Feb 18 16:26:21.916: INFO: (0) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 35.098155ms) Feb 18 16:26:21.923: INFO: (0) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:462/proxy/: tls qux (200; 41.902406ms) Feb 18 16:26:21.923: INFO: (0) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname2/proxy/: tls qux (200; 42.754317ms) Feb 18 16:26:21.924: INFO: (0) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:443/proxy/: test (200; 13.570545ms) Feb 18 16:26:21.938: INFO: (1) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:1080/proxy/: te... (200; 13.927074ms) Feb 18 16:26:21.939: INFO: (1) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:460/proxy/: tls baz (200; 14.456537ms) Feb 18 16:26:21.939: INFO: (1) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:1080/proxy/: testte... (200; 12.722192ms) Feb 18 16:26:21.960: INFO: (2) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 14.605944ms) Feb 18 16:26:21.965: INFO: (2) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:460/proxy/: tls baz (200; 19.303312ms) Feb 18 16:26:21.965: INFO: (2) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:1080/proxy/: testtest (200; 19.552377ms) Feb 18 16:26:21.965: INFO: (2) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname2/proxy/: tls qux (200; 19.640734ms) Feb 18 16:26:21.965: INFO: (2) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:462/proxy/: tls qux (200; 19.60829ms) Feb 18 16:26:21.965: INFO: (2) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname2/proxy/: bar (200; 19.725025ms) Feb 18 16:26:21.965: INFO: (2) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname1/proxy/: foo (200; 19.910893ms) Feb 18 16:26:21.965: INFO: (2) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname1/proxy/: foo (200; 19.90087ms) Feb 18 16:26:21.966: INFO: (2) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname1/proxy/: tls baz (200; 20.363879ms) Feb 18 16:26:21.978: INFO: (3) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:460/proxy/: tls baz (200; 11.497032ms) Feb 18 16:26:21.978: INFO: (3) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 12.146633ms) Feb 18 16:26:21.978: INFO: (3) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:1080/proxy/: testte... 
(200; 21.505674ms) Feb 18 16:26:21.988: INFO: (3) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk/proxy/: test (200; 22.081986ms) Feb 18 16:26:21.990: INFO: (3) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:462/proxy/: tls qux (200; 23.620159ms) Feb 18 16:26:21.990: INFO: (3) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname1/proxy/: tls baz (200; 24.080118ms) Feb 18 16:26:21.990: INFO: (3) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 23.817497ms) Feb 18 16:26:21.990: INFO: (3) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname2/proxy/: bar (200; 23.82893ms) Feb 18 16:26:21.993: INFO: (3) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname1/proxy/: foo (200; 27.127696ms) Feb 18 16:26:22.003: INFO: (4) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname1/proxy/: foo (200; 9.213098ms) Feb 18 16:26:22.015: INFO: (4) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 21.327795ms) Feb 18 16:26:22.015: INFO: (4) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:460/proxy/: tls baz (200; 21.494325ms) Feb 18 16:26:22.016: INFO: (4) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 22.112109ms) Feb 18 16:26:22.018: INFO: (4) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:443/proxy/: testtest (200; 24.525182ms) Feb 18 16:26:22.029: INFO: (4) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname1/proxy/: tls baz (200; 34.830118ms) Feb 18 16:26:22.029: INFO: (4) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:462/proxy/: tls qux (200; 35.38315ms) Feb 18 16:26:22.029: INFO: (4) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 35.189233ms) Feb 18 16:26:22.029: INFO: (4) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname2/proxy/: bar (200; 35.270112ms) Feb 18 16:26:22.029: INFO: (4) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname2/proxy/: tls qux (200; 35.287614ms) Feb 18 16:26:22.029: INFO: (4) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname1/proxy/: foo (200; 35.395034ms) Feb 18 16:26:22.029: INFO: (4) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:1080/proxy/: te... 
(200; 35.229205ms) Feb 18 16:26:22.031: INFO: (4) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname2/proxy/: bar (200; 37.820176ms) Feb 18 16:26:22.046: INFO: (5) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 13.432359ms) Feb 18 16:26:22.046: INFO: (5) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:462/proxy/: tls qux (200; 13.459036ms) Feb 18 16:26:22.046: INFO: (5) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname2/proxy/: bar (200; 13.885711ms) Feb 18 16:26:22.046: INFO: (5) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname1/proxy/: tls baz (200; 14.636883ms) Feb 18 16:26:22.046: INFO: (5) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname2/proxy/: bar (200; 13.894473ms) Feb 18 16:26:22.047: INFO: (5) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 14.766523ms) Feb 18 16:26:22.049: INFO: (5) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:443/proxy/: test (200; 15.850219ms) Feb 18 16:26:22.049: INFO: (5) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 16.111052ms) Feb 18 16:26:22.049: INFO: (5) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname1/proxy/: foo (200; 16.023202ms) Feb 18 16:26:22.049: INFO: (5) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 16.393529ms) Feb 18 16:26:22.049: INFO: (5) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:1080/proxy/: testte... (200; 16.784839ms) Feb 18 16:26:22.049: INFO: (5) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname1/proxy/: foo (200; 16.440564ms) Feb 18 16:26:22.051: INFO: (5) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:460/proxy/: tls baz (200; 17.766878ms) Feb 18 16:26:22.051: INFO: (5) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname2/proxy/: tls qux (200; 18.952172ms) Feb 18 16:26:22.060: INFO: (6) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 8.268856ms) Feb 18 16:26:22.060: INFO: (6) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 7.910969ms) Feb 18 16:26:22.063: INFO: (6) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:443/proxy/: test (200; 13.122302ms) Feb 18 16:26:22.065: INFO: (6) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:1080/proxy/: te... (200; 13.292751ms) Feb 18 16:26:22.065: INFO: (6) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:1080/proxy/: testtest (200; 16.262124ms) Feb 18 16:26:22.084: INFO: (7) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 16.367112ms) Feb 18 16:26:22.084: INFO: (7) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 16.370665ms) Feb 18 16:26:22.084: INFO: (7) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname2/proxy/: tls qux (200; 16.267948ms) Feb 18 16:26:22.084: INFO: (7) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:1080/proxy/: te... 
(200; 16.227793ms) Feb 18 16:26:22.085: INFO: (7) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname2/proxy/: bar (200; 17.360314ms) Feb 18 16:26:22.085: INFO: (7) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:460/proxy/: tls baz (200; 17.494686ms) Feb 18 16:26:22.087: INFO: (7) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:1080/proxy/: testtest (200; 9.951106ms) Feb 18 16:26:22.098: INFO: (8) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 10.09955ms) Feb 18 16:26:22.098: INFO: (8) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:443/proxy/: testte... (200; 19.51906ms) Feb 18 16:26:22.107: INFO: (8) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:460/proxy/: tls baz (200; 19.410946ms) Feb 18 16:26:22.107: INFO: (8) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 19.518309ms) Feb 18 16:26:22.108: INFO: (8) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname1/proxy/: foo (200; 19.80283ms) Feb 18 16:26:22.108: INFO: (8) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 20.065551ms) Feb 18 16:26:22.112: INFO: (9) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 4.51186ms) Feb 18 16:26:22.114: INFO: (9) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:443/proxy/: testte... (200; 13.157529ms) Feb 18 16:26:22.122: INFO: (9) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk/proxy/: test (200; 12.917685ms) Feb 18 16:26:22.122: INFO: (9) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname1/proxy/: foo (200; 12.976728ms) Feb 18 16:26:22.122: INFO: (9) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 13.036069ms) Feb 18 16:26:22.123: INFO: (9) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname1/proxy/: foo (200; 14.033568ms) Feb 18 16:26:22.123: INFO: (9) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:460/proxy/: tls baz (200; 14.042193ms) Feb 18 16:26:22.123: INFO: (9) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 13.894183ms) Feb 18 16:26:22.123: INFO: (9) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 14.603981ms) Feb 18 16:26:22.133: INFO: (10) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk/proxy/: test (200; 9.340762ms) Feb 18 16:26:22.133: INFO: (10) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 9.543409ms) Feb 18 16:26:22.135: INFO: (10) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:462/proxy/: tls qux (200; 10.727725ms) Feb 18 16:26:22.135: INFO: (10) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 10.73844ms) Feb 18 16:26:22.135: INFO: (10) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:460/proxy/: tls baz (200; 10.688058ms) Feb 18 16:26:22.135: INFO: (10) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 11.140793ms) Feb 18 16:26:22.136: INFO: (10) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 11.858776ms) Feb 18 16:26:22.139: INFO: (10) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:1080/proxy/: te... 
(200; 14.819795ms) Feb 18 16:26:22.139: INFO: (10) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname2/proxy/: bar (200; 15.302409ms) Feb 18 16:26:22.140: INFO: (10) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:1080/proxy/: testtesttest (200; 15.244896ms) Feb 18 16:26:22.158: INFO: (11) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 15.103007ms) Feb 18 16:26:22.159: INFO: (11) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 16.452482ms) Feb 18 16:26:22.161: INFO: (11) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:1080/proxy/: te... (200; 17.509567ms) Feb 18 16:26:22.161: INFO: (11) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:443/proxy/: test (200; 8.262036ms) Feb 18 16:26:22.188: INFO: (12) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 8.598354ms) Feb 18 16:26:22.190: INFO: (12) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:1080/proxy/: testte... (200; 14.278919ms) Feb 18 16:26:22.195: INFO: (12) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname1/proxy/: foo (200; 15.052471ms) Feb 18 16:26:22.195: INFO: (12) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname1/proxy/: foo (200; 15.277483ms) Feb 18 16:26:22.195: INFO: (12) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname2/proxy/: bar (200; 15.528666ms) Feb 18 16:26:22.195: INFO: (12) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:462/proxy/: tls qux (200; 15.751468ms) Feb 18 16:26:22.196: INFO: (12) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 16.041507ms) Feb 18 16:26:22.196: INFO: (12) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 16.121545ms) Feb 18 16:26:22.196: INFO: (12) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 16.198309ms) Feb 18 16:26:22.202: INFO: (13) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:1080/proxy/: testtest (200; 6.979167ms) Feb 18 16:26:22.207: INFO: (13) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 8.823696ms) Feb 18 16:26:22.207: INFO: (13) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 9.816892ms) Feb 18 16:26:22.207: INFO: (13) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 10.019965ms) Feb 18 16:26:22.207: INFO: (13) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 9.66543ms) Feb 18 16:26:22.208: INFO: (13) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:443/proxy/: te... 
(200; 12.257154ms) Feb 18 16:26:22.210: INFO: (13) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname1/proxy/: tls baz (200; 12.933714ms) Feb 18 16:26:22.210: INFO: (13) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:460/proxy/: tls baz (200; 12.614915ms) Feb 18 16:26:22.217: INFO: (14) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 6.770405ms) Feb 18 16:26:22.218: INFO: (14) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 7.466653ms) Feb 18 16:26:22.219: INFO: (14) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:443/proxy/: test (200; 26.688474ms) Feb 18 16:26:22.237: INFO: (14) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 26.511023ms) Feb 18 16:26:22.238: INFO: (14) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname1/proxy/: foo (200; 27.207304ms) Feb 18 16:26:22.238: INFO: (14) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname1/proxy/: tls baz (200; 27.147117ms) Feb 18 16:26:22.238: INFO: (14) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname2/proxy/: bar (200; 27.284154ms) Feb 18 16:26:22.238: INFO: (14) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname1/proxy/: foo (200; 27.346436ms) Feb 18 16:26:22.238: INFO: (14) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname2/proxy/: tls qux (200; 27.569063ms) Feb 18 16:26:22.244: INFO: (14) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:1080/proxy/: testte... (200; 34.430064ms) Feb 18 16:26:22.245: INFO: (14) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:460/proxy/: tls baz (200; 34.789426ms) Feb 18 16:26:22.265: INFO: (15) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname2/proxy/: tls qux (200; 19.601971ms) Feb 18 16:26:22.265: INFO: (15) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname2/proxy/: bar (200; 19.65803ms) Feb 18 16:26:22.265: INFO: (15) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:1080/proxy/: te... (200; 19.624442ms) Feb 18 16:26:22.266: INFO: (15) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname1/proxy/: foo (200; 19.729958ms) Feb 18 16:26:22.266: INFO: (15) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname1/proxy/: foo (200; 20.246606ms) Feb 18 16:26:22.267: INFO: (15) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname2/proxy/: bar (200; 21.287629ms) Feb 18 16:26:22.267: INFO: (15) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname1/proxy/: tls baz (200; 21.347501ms) Feb 18 16:26:22.267: INFO: (15) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:1080/proxy/: testtest (200; 23.182321ms) Feb 18 16:26:22.269: INFO: (15) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 22.838546ms) Feb 18 16:26:22.269: INFO: (15) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:443/proxy/: test (200; 12.265564ms) Feb 18 16:26:22.284: INFO: (16) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:1080/proxy/: te... (200; 13.933231ms) Feb 18 16:26:22.292: INFO: (16) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:443/proxy/: testtest (200; 19.73862ms) Feb 18 16:26:22.325: INFO: (17) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:1080/proxy/: testte... 
(200; 21.122773ms) Feb 18 16:26:22.327: INFO: (17) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:462/proxy/: tls qux (200; 21.230413ms) Feb 18 16:26:22.327: INFO: (17) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 21.451533ms) Feb 18 16:26:22.327: INFO: (17) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:460/proxy/: tls baz (200; 21.906089ms) Feb 18 16:26:22.328: INFO: (17) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname2/proxy/: bar (200; 22.580167ms) Feb 18 16:26:22.329: INFO: (17) /api/v1/namespaces/proxy-70/services/proxy-service-bkn7k:portname1/proxy/: foo (200; 23.239958ms) Feb 18 16:26:22.329: INFO: (17) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname1/proxy/: tls baz (200; 23.279366ms) Feb 18 16:26:22.329: INFO: (17) /api/v1/namespaces/proxy-70/services/https:proxy-service-bkn7k:tlsportname2/proxy/: tls qux (200; 23.362584ms) Feb 18 16:26:22.329: INFO: (17) /api/v1/namespaces/proxy-70/services/http:proxy-service-bkn7k:portname2/proxy/: bar (200; 23.441572ms) Feb 18 16:26:22.342: INFO: (18) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 12.512378ms) Feb 18 16:26:22.342: INFO: (18) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 12.624854ms) Feb 18 16:26:22.342: INFO: (18) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:460/proxy/: tls baz (200; 12.786399ms) Feb 18 16:26:22.342: INFO: (18) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk/proxy/: test (200; 12.83929ms) Feb 18 16:26:22.342: INFO: (18) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:1080/proxy/: te... (200; 12.65827ms) Feb 18 16:26:22.342: INFO: (18) /api/v1/namespaces/proxy-70/pods/proxy-service-bkn7k-m8rxk:160/proxy/: foo (200; 12.825669ms) Feb 18 16:26:22.342: INFO: (18) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:443/proxy/: testtestte... (200; 17.019509ms) Feb 18 16:26:22.363: INFO: (19) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:460/proxy/: tls baz (200; 17.073395ms) Feb 18 16:26:22.363: INFO: (19) /api/v1/namespaces/proxy-70/pods/http:proxy-service-bkn7k-m8rxk:162/proxy/: bar (200; 17.109694ms) Feb 18 16:26:22.363: INFO: (19) /api/v1/namespaces/proxy-70/pods/https:proxy-service-bkn7k-m8rxk:443/proxy/: test (200; 22.382028ms) STEP: deleting ReplicationController proxy-service-bkn7k in namespace proxy-70, will wait for the garbage collector to delete the pods Feb 18 16:26:22.434: INFO: Deleting ReplicationController proxy-service-bkn7k took: 12.354708ms Feb 18 16:26:22.835: INFO: Terminating ReplicationController proxy-service-bkn7k pods took: 401.394566ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:26:32.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-70" for this suite. 
• [SLOW TEST:27.878 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":280,"completed":93,"skipped":1228,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:26:32.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating server pod server in namespace prestop-1999 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1999 STEP: Deleting pre-stop pod Feb 18 16:26:55.723: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:26:55.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1999" for this suite. 
• [SLOW TEST:23.530 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":280,"completed":94,"skipped":1239,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:26:55.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Feb 18 16:26:56.175: INFO: Waiting up to 5m0s for pod "downward-api-13bef029-c77f-4448-8b40-47da1fd1bd45" in namespace "downward-api-9728" to be "success or failure" Feb 18 16:26:56.215: INFO: Pod "downward-api-13bef029-c77f-4448-8b40-47da1fd1bd45": Phase="Pending", Reason="", readiness=false. Elapsed: 40.213326ms Feb 18 16:26:58.222: INFO: Pod "downward-api-13bef029-c77f-4448-8b40-47da1fd1bd45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046782531s Feb 18 16:27:00.229: INFO: Pod "downward-api-13bef029-c77f-4448-8b40-47da1fd1bd45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053701852s Feb 18 16:27:02.240: INFO: Pod "downward-api-13bef029-c77f-4448-8b40-47da1fd1bd45": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06514559s Feb 18 16:27:04.247: INFO: Pod "downward-api-13bef029-c77f-4448-8b40-47da1fd1bd45": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071487741s Feb 18 16:27:06.768: INFO: Pod "downward-api-13bef029-c77f-4448-8b40-47da1fd1bd45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.592658951s STEP: Saw pod success Feb 18 16:27:06.768: INFO: Pod "downward-api-13bef029-c77f-4448-8b40-47da1fd1bd45" satisfied condition "success or failure" Feb 18 16:27:06.780: INFO: Trying to get logs from node jerma-node pod downward-api-13bef029-c77f-4448-8b40-47da1fd1bd45 container dapi-container: STEP: delete the pod Feb 18 16:27:06.951: INFO: Waiting for pod downward-api-13bef029-c77f-4448-8b40-47da1fd1bd45 to disappear Feb 18 16:27:06.957: INFO: Pod downward-api-13bef029-c77f-4448-8b40-47da1fd1bd45 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:27:06.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9728" for this suite. 
• [SLOW TEST:11.077 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":280,"completed":95,"skipped":1244,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:27:06.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 18 16:27:07.377: INFO: Waiting up to 5m0s for pod "pod-3749a038-96cb-439c-bf9b-00336c4c8964" in namespace "emptydir-4835" to be "success or failure" Feb 18 16:27:07.410: INFO: Pod "pod-3749a038-96cb-439c-bf9b-00336c4c8964": Phase="Pending", Reason="", readiness=false. Elapsed: 33.083892ms Feb 18 16:27:09.421: INFO: Pod "pod-3749a038-96cb-439c-bf9b-00336c4c8964": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044107798s Feb 18 16:27:11.430: INFO: Pod "pod-3749a038-96cb-439c-bf9b-00336c4c8964": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053122323s Feb 18 16:27:13.441: INFO: Pod "pod-3749a038-96cb-439c-bf9b-00336c4c8964": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064354822s Feb 18 16:27:15.449: INFO: Pod "pod-3749a038-96cb-439c-bf9b-00336c4c8964": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071898517s STEP: Saw pod success Feb 18 16:27:15.449: INFO: Pod "pod-3749a038-96cb-439c-bf9b-00336c4c8964" satisfied condition "success or failure" Feb 18 16:27:15.453: INFO: Trying to get logs from node jerma-node pod pod-3749a038-96cb-439c-bf9b-00336c4c8964 container test-container: STEP: delete the pod Feb 18 16:27:15.495: INFO: Waiting for pod pod-3749a038-96cb-439c-bf9b-00336c4c8964 to disappear Feb 18 16:27:15.516: INFO: Pod pod-3749a038-96cb-439c-bf9b-00336c4c8964 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:27:15.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4835" for this suite. 
• [SLOW TEST:8.573 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":96,"skipped":1247,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:27:15.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-projected-qdxl STEP: Creating a pod to test atomic-volume-subpath Feb 18 16:27:15.809: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-qdxl" in namespace "subpath-5105" to be "success or failure" Feb 18 16:27:15.854: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Pending", Reason="", readiness=false. Elapsed: 44.024347ms Feb 18 16:27:17.862: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052598879s Feb 18 16:27:19.881: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070851143s Feb 18 16:27:21.889: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079692859s Feb 18 16:27:23.913: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103611687s Feb 18 16:27:25.923: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Running", Reason="", readiness=true. Elapsed: 10.112837476s Feb 18 16:27:27.930: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Running", Reason="", readiness=true. Elapsed: 12.119881241s Feb 18 16:27:29.934: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Running", Reason="", readiness=true. Elapsed: 14.124488057s Feb 18 16:27:31.945: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Running", Reason="", readiness=true. Elapsed: 16.134940583s Feb 18 16:27:33.957: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Running", Reason="", readiness=true. Elapsed: 18.14755504s Feb 18 16:27:36.002: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Running", Reason="", readiness=true. Elapsed: 20.191714464s Feb 18 16:27:38.013: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Running", Reason="", readiness=true. Elapsed: 22.203185182s Feb 18 16:27:40.033: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.22354206s Feb 18 16:27:42.041: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Running", Reason="", readiness=true. Elapsed: 26.231450539s Feb 18 16:27:44.048: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Running", Reason="", readiness=true. Elapsed: 28.238565927s Feb 18 16:27:46.058: INFO: Pod "pod-subpath-test-projected-qdxl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.247841044s STEP: Saw pod success Feb 18 16:27:46.058: INFO: Pod "pod-subpath-test-projected-qdxl" satisfied condition "success or failure" Feb 18 16:27:46.065: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-qdxl container test-container-subpath-projected-qdxl: STEP: delete the pod Feb 18 16:27:46.117: INFO: Waiting for pod pod-subpath-test-projected-qdxl to disappear Feb 18 16:27:46.147: INFO: Pod pod-subpath-test-projected-qdxl no longer exists STEP: Deleting pod pod-subpath-test-projected-qdxl Feb 18 16:27:46.147: INFO: Deleting pod "pod-subpath-test-projected-qdxl" in namespace "subpath-5105" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:27:46.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5105" for this suite. • [SLOW TEST:30.759 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":280,"completed":97,"skipped":1253,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:27:46.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:27:46.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6655" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":280,"completed":98,"skipped":1262,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:27:46.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7020 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7020 I0218 16:27:46.946477 9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7020, replica count: 2 I0218 16:27:49.998185 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:27:52.998749 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:27:55.999152 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:27:58.999714 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 18 16:27:58.999: INFO: Creating new exec pod Feb 18 16:28:08.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7020 execpod82ngc -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Feb 18 16:28:08.490: INFO: stderr: "I0218 16:28:08.286942 1250 log.go:172] (0xc0003ef760) (0xc000a7c6e0) Create stream\nI0218 16:28:08.287112 1250 log.go:172] (0xc0003ef760) (0xc000a7c6e0) Stream added, broadcasting: 1\nI0218 16:28:08.295431 1250 log.go:172] (0xc0003ef760) Reply frame received for 1\nI0218 16:28:08.295482 1250 log.go:172] (0xc0003ef760) (0xc000a7c000) Create stream\nI0218 16:28:08.295495 1250 log.go:172] (0xc0003ef760) (0xc000a7c000) Stream added, broadcasting: 3\nI0218 16:28:08.297386 1250 log.go:172] (0xc0003ef760) Reply frame received for 3\nI0218 16:28:08.297519 1250 log.go:172] (0xc0003ef760) (0xc000a7c0a0) Create stream\nI0218 16:28:08.297535 1250 log.go:172] (0xc0003ef760) (0xc000a7c0a0) Stream added, broadcasting: 5\nI0218 16:28:08.299977 1250 log.go:172] (0xc0003ef760) Reply frame received for 5\nI0218 16:28:08.377598 1250 log.go:172] (0xc0003ef760) Data frame 
received for 5\nI0218 16:28:08.377667 1250 log.go:172] (0xc000a7c0a0) (5) Data frame handling\nI0218 16:28:08.377689 1250 log.go:172] (0xc000a7c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0218 16:28:08.386297 1250 log.go:172] (0xc0003ef760) Data frame received for 5\nI0218 16:28:08.386324 1250 log.go:172] (0xc000a7c0a0) (5) Data frame handling\nI0218 16:28:08.386342 1250 log.go:172] (0xc000a7c0a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0218 16:28:08.476689 1250 log.go:172] (0xc0003ef760) Data frame received for 1\nI0218 16:28:08.476847 1250 log.go:172] (0xc0003ef760) (0xc000a7c000) Stream removed, broadcasting: 3\nI0218 16:28:08.476915 1250 log.go:172] (0xc000a7c6e0) (1) Data frame handling\nI0218 16:28:08.476938 1250 log.go:172] (0xc000a7c6e0) (1) Data frame sent\nI0218 16:28:08.476954 1250 log.go:172] (0xc0003ef760) (0xc000a7c0a0) Stream removed, broadcasting: 5\nI0218 16:28:08.477164 1250 log.go:172] (0xc0003ef760) (0xc000a7c6e0) Stream removed, broadcasting: 1\nI0218 16:28:08.477337 1250 log.go:172] (0xc0003ef760) Go away received\nI0218 16:28:08.478112 1250 log.go:172] (0xc0003ef760) (0xc000a7c6e0) Stream removed, broadcasting: 1\nI0218 16:28:08.478128 1250 log.go:172] (0xc0003ef760) (0xc000a7c000) Stream removed, broadcasting: 3\nI0218 16:28:08.478142 1250 log.go:172] (0xc0003ef760) (0xc000a7c0a0) Stream removed, broadcasting: 5\n" Feb 18 16:28:08.491: INFO: stdout: "" Feb 18 16:28:08.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7020 execpod82ngc -- /bin/sh -x -c nc -zv -t -w 2 10.96.231.171 80' Feb 18 16:28:09.028: INFO: stderr: "I0218 16:28:08.817193 1270 log.go:172] (0xc0003d6e70) (0xc00076e000) Create stream\nI0218 16:28:08.817331 1270 log.go:172] (0xc0003d6e70) (0xc00076e000) Stream added, broadcasting: 1\nI0218 16:28:08.823991 1270 log.go:172] (0xc0003d6e70) Reply frame received for 1\nI0218 16:28:08.824083 1270 log.go:172] (0xc0003d6e70) (0xc00076e140) Create stream\nI0218 16:28:08.824098 1270 log.go:172] (0xc0003d6e70) (0xc00076e140) Stream added, broadcasting: 3\nI0218 16:28:08.825934 1270 log.go:172] (0xc0003d6e70) Reply frame received for 3\nI0218 16:28:08.825974 1270 log.go:172] (0xc0003d6e70) (0xc0005b1b80) Create stream\nI0218 16:28:08.825995 1270 log.go:172] (0xc0003d6e70) (0xc0005b1b80) Stream added, broadcasting: 5\nI0218 16:28:08.827631 1270 log.go:172] (0xc0003d6e70) Reply frame received for 5\nI0218 16:28:08.899506 1270 log.go:172] (0xc0003d6e70) Data frame received for 5\nI0218 16:28:08.899572 1270 log.go:172] (0xc0005b1b80) (5) Data frame handling\nI0218 16:28:08.899615 1270 log.go:172] (0xc0005b1b80) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.231.171 80\nI0218 16:28:08.900209 1270 log.go:172] (0xc0003d6e70) Data frame received for 5\nI0218 16:28:08.900226 1270 log.go:172] (0xc0005b1b80) (5) Data frame handling\nI0218 16:28:08.900244 1270 log.go:172] (0xc0005b1b80) (5) Data frame sent\nConnection to 10.96.231.171 80 port [tcp/http] succeeded!\nI0218 16:28:09.010055 1270 log.go:172] (0xc0003d6e70) Data frame received for 1\nI0218 16:28:09.010204 1270 log.go:172] (0xc0003d6e70) (0xc0005b1b80) Stream removed, broadcasting: 5\nI0218 16:28:09.010319 1270 log.go:172] (0xc00076e000) (1) Data frame handling\nI0218 16:28:09.010346 1270 log.go:172] (0xc0003d6e70) (0xc00076e140) Stream removed, broadcasting: 3\nI0218 16:28:09.010376 1270 log.go:172] (0xc00076e000) (1) Data frame sent\nI0218 16:28:09.010390 1270 log.go:172] (0xc0003d6e70) 
(0xc00076e000) Stream removed, broadcasting: 1\nI0218 16:28:09.010406 1270 log.go:172] (0xc0003d6e70) Go away received\nI0218 16:28:09.011698 1270 log.go:172] (0xc0003d6e70) (0xc00076e000) Stream removed, broadcasting: 1\nI0218 16:28:09.011728 1270 log.go:172] (0xc0003d6e70) (0xc00076e140) Stream removed, broadcasting: 3\nI0218 16:28:09.011746 1270 log.go:172] (0xc0003d6e70) (0xc0005b1b80) Stream removed, broadcasting: 5\n" Feb 18 16:28:09.028: INFO: stdout: "" Feb 18 16:28:09.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7020 execpod82ngc -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30849' Feb 18 16:28:09.416: INFO: stderr: "I0218 16:28:09.223262 1292 log.go:172] (0xc000af18c0) (0xc000ab0780) Create stream\nI0218 16:28:09.223391 1292 log.go:172] (0xc000af18c0) (0xc000ab0780) Stream added, broadcasting: 1\nI0218 16:28:09.227341 1292 log.go:172] (0xc000af18c0) Reply frame received for 1\nI0218 16:28:09.227383 1292 log.go:172] (0xc000af18c0) (0xc000a84820) Create stream\nI0218 16:28:09.227391 1292 log.go:172] (0xc000af18c0) (0xc000a84820) Stream added, broadcasting: 3\nI0218 16:28:09.228614 1292 log.go:172] (0xc000af18c0) Reply frame received for 3\nI0218 16:28:09.228639 1292 log.go:172] (0xc000af18c0) (0xc00098c460) Create stream\nI0218 16:28:09.228647 1292 log.go:172] (0xc000af18c0) (0xc00098c460) Stream added, broadcasting: 5\nI0218 16:28:09.229723 1292 log.go:172] (0xc000af18c0) Reply frame received for 5\nI0218 16:28:09.322133 1292 log.go:172] (0xc000af18c0) Data frame received for 5\nI0218 16:28:09.322178 1292 log.go:172] (0xc00098c460) (5) Data frame handling\nI0218 16:28:09.322203 1292 log.go:172] (0xc00098c460) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30849\nI0218 16:28:09.323363 1292 log.go:172] (0xc000af18c0) Data frame received for 5\nI0218 16:28:09.323373 1292 log.go:172] (0xc00098c460) (5) Data frame handling\nI0218 16:28:09.323382 1292 log.go:172] (0xc00098c460) (5) Data frame sent\nConnection to 10.96.2.250 30849 port [tcp/30849] succeeded!\nI0218 16:28:09.402617 1292 log.go:172] (0xc000af18c0) (0xc00098c460) Stream removed, broadcasting: 5\nI0218 16:28:09.402956 1292 log.go:172] (0xc000af18c0) Data frame received for 1\nI0218 16:28:09.403006 1292 log.go:172] (0xc000af18c0) (0xc000a84820) Stream removed, broadcasting: 3\nI0218 16:28:09.403179 1292 log.go:172] (0xc000ab0780) (1) Data frame handling\nI0218 16:28:09.403222 1292 log.go:172] (0xc000ab0780) (1) Data frame sent\nI0218 16:28:09.403246 1292 log.go:172] (0xc000af18c0) (0xc000ab0780) Stream removed, broadcasting: 1\nI0218 16:28:09.403279 1292 log.go:172] (0xc000af18c0) Go away received\nI0218 16:28:09.404643 1292 log.go:172] (0xc000af18c0) (0xc000ab0780) Stream removed, broadcasting: 1\nI0218 16:28:09.404681 1292 log.go:172] (0xc000af18c0) (0xc000a84820) Stream removed, broadcasting: 3\nI0218 16:28:09.404695 1292 log.go:172] (0xc000af18c0) (0xc00098c460) Stream removed, broadcasting: 5\n" Feb 18 16:28:09.416: INFO: stdout: "" Feb 18 16:28:09.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7020 execpod82ngc -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30849' Feb 18 16:28:09.770: INFO: stderr: "I0218 16:28:09.606777 1314 log.go:172] (0xc000b52000) (0xc000912000) Create stream\nI0218 16:28:09.607055 1314 log.go:172] (0xc000b52000) (0xc000912000) Stream added, broadcasting: 1\nI0218 16:28:09.612369 1314 log.go:172] (0xc000b52000) Reply frame received for 1\nI0218 16:28:09.612434 1314 log.go:172] (0xc000b52000) 
(0xc0009120a0) Create stream\nI0218 16:28:09.612452 1314 log.go:172] (0xc000b52000) (0xc0009120a0) Stream added, broadcasting: 3\nI0218 16:28:09.615255 1314 log.go:172] (0xc000b52000) Reply frame received for 3\nI0218 16:28:09.615294 1314 log.go:172] (0xc000b52000) (0xc000912140) Create stream\nI0218 16:28:09.615308 1314 log.go:172] (0xc000b52000) (0xc000912140) Stream added, broadcasting: 5\nI0218 16:28:09.617735 1314 log.go:172] (0xc000b52000) Reply frame received for 5\nI0218 16:28:09.682820 1314 log.go:172] (0xc000b52000) Data frame received for 5\nI0218 16:28:09.682925 1314 log.go:172] (0xc000912140) (5) Data frame handling\nI0218 16:28:09.682983 1314 log.go:172] (0xc000912140) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30849\nI0218 16:28:09.683187 1314 log.go:172] (0xc000b52000) Data frame received for 5\nI0218 16:28:09.683198 1314 log.go:172] (0xc000912140) (5) Data frame handling\nI0218 16:28:09.683212 1314 log.go:172] (0xc000912140) (5) Data frame sent\nConnection to 10.96.1.234 30849 port [tcp/30849] succeeded!\nI0218 16:28:09.754468 1314 log.go:172] (0xc000b52000) Data frame received for 1\nI0218 16:28:09.754887 1314 log.go:172] (0xc000b52000) (0xc0009120a0) Stream removed, broadcasting: 3\nI0218 16:28:09.755017 1314 log.go:172] (0xc000912000) (1) Data frame handling\nI0218 16:28:09.755167 1314 log.go:172] (0xc000912000) (1) Data frame sent\nI0218 16:28:09.755258 1314 log.go:172] (0xc000b52000) (0xc000912140) Stream removed, broadcasting: 5\nI0218 16:28:09.755350 1314 log.go:172] (0xc000b52000) (0xc000912000) Stream removed, broadcasting: 1\nI0218 16:28:09.755404 1314 log.go:172] (0xc000b52000) Go away received\nI0218 16:28:09.756651 1314 log.go:172] (0xc000b52000) (0xc000912000) Stream removed, broadcasting: 1\nI0218 16:28:09.756684 1314 log.go:172] (0xc000b52000) (0xc0009120a0) Stream removed, broadcasting: 3\nI0218 16:28:09.756704 1314 log.go:172] (0xc000b52000) (0xc000912140) Stream removed, broadcasting: 5\n" Feb 18 16:28:09.770: INFO: stdout: "" Feb 18 16:28:09.770: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:28:09.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7020" for this suite. 
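------------------------------
The type flip exercised above is a plain in-place Service update. A minimal sketch with client-go, assuming v0.18+ signatures (calls take a context) and the kubeconfig path from the run; the generated namespace and the replication controller backing the endpoints are left out:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	ns := "default" // the suite uses a generated namespace such as services-7020

	// Step 1: an ExternalName service is just a DNS CNAME; no cluster IP,
	// no proxying.
	svc, err := cs.CoreV1().Services(ns).Create(ctx, &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "externalname-service"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "example.com",
		},
	}, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: mutate the same object to type NodePort. The apiserver then
	// allocates a cluster IP and a node port (30849 in the run above),
	// which kube-proxy programs on every node.
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = ""
	svc.Spec.Ports = []corev1.ServicePort{{Port: 80}}
	svc.Spec.Selector = map[string]string{"name": "externalname-service"} // matches the suite's RC pods
	svc, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("clusterIP=%s nodePort=%d\n", svc.Spec.ClusterIP, svc.Spec.Ports[0].NodePort)
}

The four exec transcripts above are the verification half: from a helper pod, nc -zv -t -w 2 is run against the service name, the cluster IP (10.96.231.171:80), and each node IP on the allocated node port (30849), and every probe must report "succeeded".
------------------------------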
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:23.260 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":280,"completed":99,"skipped":1265,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:28:09.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 18 16:28:10.024: INFO: Waiting up to 5m0s for pod "pod-3207337d-9f94-415a-92b7-c7c53ec121c0" in namespace "emptydir-7241" to be "success or failure" Feb 18 16:28:10.028: INFO: Pod "pod-3207337d-9f94-415a-92b7-c7c53ec121c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.697118ms Feb 18 16:28:12.037: INFO: Pod "pod-3207337d-9f94-415a-92b7-c7c53ec121c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01214023s Feb 18 16:28:14.047: INFO: Pod "pod-3207337d-9f94-415a-92b7-c7c53ec121c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022749462s Feb 18 16:28:16.096: INFO: Pod "pod-3207337d-9f94-415a-92b7-c7c53ec121c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071311161s Feb 18 16:28:19.442: INFO: Pod "pod-3207337d-9f94-415a-92b7-c7c53ec121c0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.41750213s Feb 18 16:28:21.747: INFO: Pod "pod-3207337d-9f94-415a-92b7-c7c53ec121c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.722129934s STEP: Saw pod success Feb 18 16:28:21.747: INFO: Pod "pod-3207337d-9f94-415a-92b7-c7c53ec121c0" satisfied condition "success or failure" Feb 18 16:28:22.227: INFO: Trying to get logs from node jerma-node pod pod-3207337d-9f94-415a-92b7-c7c53ec121c0 container test-container: STEP: delete the pod Feb 18 16:28:22.610: INFO: Waiting for pod pod-3207337d-9f94-415a-92b7-c7c53ec121c0 to disappear Feb 18 16:28:22.625: INFO: Pod pod-3207337d-9f94-415a-92b7-c7c53ec121c0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:28:22.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7241" for this suite. 
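------------------------------
Each of these emptyDir specs follows the same rhythm visible in the timestamps: create a pod whose single container inspects a mount and exits, poll the phase until it is terminal, fetch the container log, delete the pod. A minimal sketch of the "success or failure" wait, assuming client-go v0.18+ and a hypothetical pod name; the tmpfs variants differ from this default-medium one only by setting EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSuccessOrFailure polls a pod until it reaches a terminal phase,
// mirroring the "Waiting up to 5m0s for pod ... to be 'success or failure'"
// lines in the transcript.
func waitForSuccessOrFailure(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s failed", name)
		default:
			return false, nil // still Pending/Running, keep polling
		}
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// "pod-under-test" stands in for a pod like the emptydir one, e.g. a
	// container running: sh -c "ls -l /test-volume" against an emptyDir mount.
	if err := waitForSuccessOrFailure(cs, "default", "pod-under-test"); err != nil {
		log.Fatal(err)
	}
	fmt.Println(`satisfied condition "success or failure"`)
}

------------------------------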
• [SLOW TEST:12.817 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":100,"skipped":1271,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:28:22.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name projected-secret-test-b4242579-fb4e-44a5-ac24-dd730828770c STEP: Creating a pod to test consume secrets Feb 18 16:28:23.091: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-40796db8-6090-459d-a0fb-1b63e5cc7c9f" in namespace "projected-3799" to be "success or failure" Feb 18 16:28:23.098: INFO: Pod "pod-projected-secrets-40796db8-6090-459d-a0fb-1b63e5cc7c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.079322ms Feb 18 16:28:25.108: INFO: Pod "pod-projected-secrets-40796db8-6090-459d-a0fb-1b63e5cc7c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016991419s Feb 18 16:28:27.114: INFO: Pod "pod-projected-secrets-40796db8-6090-459d-a0fb-1b63e5cc7c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022868687s Feb 18 16:28:29.122: INFO: Pod "pod-projected-secrets-40796db8-6090-459d-a0fb-1b63e5cc7c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030875087s Feb 18 16:28:31.206: INFO: Pod "pod-projected-secrets-40796db8-6090-459d-a0fb-1b63e5cc7c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115210911s Feb 18 16:28:33.214: INFO: Pod "pod-projected-secrets-40796db8-6090-459d-a0fb-1b63e5cc7c9f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.123210629s STEP: Saw pod success Feb 18 16:28:33.215: INFO: Pod "pod-projected-secrets-40796db8-6090-459d-a0fb-1b63e5cc7c9f" satisfied condition "success or failure" Feb 18 16:28:33.218: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-40796db8-6090-459d-a0fb-1b63e5cc7c9f container secret-volume-test: STEP: delete the pod Feb 18 16:28:33.308: INFO: Waiting for pod pod-projected-secrets-40796db8-6090-459d-a0fb-1b63e5cc7c9f to disappear Feb 18 16:28:33.315: INFO: Pod pod-projected-secrets-40796db8-6090-459d-a0fb-1b63e5cc7c9f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:28:33.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3799" for this suite. • [SLOW TEST:10.682 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":101,"skipped":1276,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:28:33.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 18 16:28:34.167: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 18 16:28:36.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640114, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640114, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640114, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640114, loc:(*time.Location)(0x7e52ca0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:28:38.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640114, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640114, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640114, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640114, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:28:40.203: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640114, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640114, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640114, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640114, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 18 16:28:43.285: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:28:43.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2261" for this suite. STEP: Destroying namespace "webhook-2261-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.127 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":280,"completed":102,"skipped":1320,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:28:43.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-6667 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 18 16:28:43.578: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 18 16:28:43.628: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 18 16:28:46.799: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 18 16:28:48.175: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 18 16:28:49.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 18 16:28:52.486: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 18 16:28:54.087: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 18 16:28:55.636: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 18 16:28:57.736: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 18 16:28:59.637: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 18 16:29:01.637: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 18 16:29:03.637: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 18 16:29:05.636: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 18 16:29:07.636: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 18 16:29:09.635: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 18 16:29:11.637: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 18 16:29:11.648: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 18 16:29:19.681: INFO: 
ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-6667 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 16:29:19.681: INFO: >>> kubeConfig: /root/.kube/config I0218 16:29:19.724386 9 log.go:172] (0xc002ad6210) (0xc001cfb2c0) Create stream I0218 16:29:19.724585 9 log.go:172] (0xc002ad6210) (0xc001cfb2c0) Stream added, broadcasting: 1 I0218 16:29:19.728020 9 log.go:172] (0xc002ad6210) Reply frame received for 1 I0218 16:29:19.728046 9 log.go:172] (0xc002ad6210) (0xc00170cc80) Create stream I0218 16:29:19.728055 9 log.go:172] (0xc002ad6210) (0xc00170cc80) Stream added, broadcasting: 3 I0218 16:29:19.729121 9 log.go:172] (0xc002ad6210) Reply frame received for 3 I0218 16:29:19.729141 9 log.go:172] (0xc002ad6210) (0xc0028ee460) Create stream I0218 16:29:19.729152 9 log.go:172] (0xc002ad6210) (0xc0028ee460) Stream added, broadcasting: 5 I0218 16:29:19.730782 9 log.go:172] (0xc002ad6210) Reply frame received for 5 I0218 16:29:19.830836 9 log.go:172] (0xc002ad6210) Data frame received for 3 I0218 16:29:19.830909 9 log.go:172] (0xc00170cc80) (3) Data frame handling I0218 16:29:19.830931 9 log.go:172] (0xc00170cc80) (3) Data frame sent I0218 16:29:19.902574 9 log.go:172] (0xc002ad6210) Data frame received for 1 I0218 16:29:19.902641 9 log.go:172] (0xc001cfb2c0) (1) Data frame handling I0218 16:29:19.902652 9 log.go:172] (0xc001cfb2c0) (1) Data frame sent I0218 16:29:19.902780 9 log.go:172] (0xc002ad6210) (0xc001cfb2c0) Stream removed, broadcasting: 1 I0218 16:29:19.903156 9 log.go:172] (0xc002ad6210) (0xc00170cc80) Stream removed, broadcasting: 3 I0218 16:29:19.903438 9 log.go:172] (0xc002ad6210) (0xc0028ee460) Stream removed, broadcasting: 5 I0218 16:29:19.903462 9 log.go:172] (0xc002ad6210) (0xc001cfb2c0) Stream removed, broadcasting: 1 I0218 16:29:19.903472 9 log.go:172] (0xc002ad6210) (0xc00170cc80) Stream removed, broadcasting: 3 I0218 16:29:19.903477 9 log.go:172] (0xc002ad6210) (0xc0028ee460) Stream removed, broadcasting: 5 Feb 18 16:29:19.903: INFO: Waiting for responses: map[] Feb 18 16:29:19.913: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-6667 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 16:29:19.914: INFO: >>> kubeConfig: /root/.kube/config I0218 16:29:19.971226 9 log.go:172] (0xc001ed1340) (0xc00170d2c0) Create stream I0218 16:29:19.971566 9 log.go:172] (0xc001ed1340) (0xc00170d2c0) Stream added, broadcasting: 1 I0218 16:29:19.975623 9 log.go:172] (0xc001ed1340) Reply frame received for 1 I0218 16:29:19.975665 9 log.go:172] (0xc001ed1340) (0xc0026d8000) Create stream I0218 16:29:19.975672 9 log.go:172] (0xc001ed1340) (0xc0026d8000) Stream added, broadcasting: 3 I0218 16:29:19.976835 9 log.go:172] (0xc001ed1340) Reply frame received for 3 I0218 16:29:19.976857 9 log.go:172] (0xc001ed1340) (0xc0026d80a0) Create stream I0218 16:29:19.976864 9 log.go:172] (0xc001ed1340) (0xc0026d80a0) Stream added, broadcasting: 5 I0218 16:29:19.977632 9 log.go:172] (0xc001ed1340) Reply frame received for 5 I0218 16:29:20.052076 9 log.go:172] (0xc001ed1340) Data frame received for 3 I0218 16:29:20.052165 9 log.go:172] (0xc0026d8000) (3) Data frame handling I0218 16:29:20.052178 9 log.go:172] 
(0xc0026d8000) (3) Data frame sent I0218 16:29:20.142146 9 log.go:172] (0xc001ed1340) Data frame received for 1 I0218 16:29:20.142406 9 log.go:172] (0xc001ed1340) (0xc0026d80a0) Stream removed, broadcasting: 5 I0218 16:29:20.142519 9 log.go:172] (0xc001ed1340) (0xc0026d8000) Stream removed, broadcasting: 3 I0218 16:29:20.142595 9 log.go:172] (0xc00170d2c0) (1) Data frame handling I0218 16:29:20.142668 9 log.go:172] (0xc00170d2c0) (1) Data frame sent I0218 16:29:20.142678 9 log.go:172] (0xc001ed1340) (0xc00170d2c0) Stream removed, broadcasting: 1 I0218 16:29:20.142689 9 log.go:172] (0xc001ed1340) Go away received I0218 16:29:20.143046 9 log.go:172] (0xc001ed1340) (0xc00170d2c0) Stream removed, broadcasting: 1 I0218 16:29:20.143072 9 log.go:172] (0xc001ed1340) (0xc0026d8000) Stream removed, broadcasting: 3 I0218 16:29:20.143083 9 log.go:172] (0xc001ed1340) (0xc0026d80a0) Stream removed, broadcasting: 5 Feb 18 16:29:20.143: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:29:20.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6667" for this suite. • [SLOW TEST:36.693 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":280,"completed":103,"skipped":1326,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:29:20.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 18 16:29:20.250: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b164f7a-aab6-4835-9531-352a4ea4b81e" in namespace "downward-api-8979" to be "success or failure" Feb 18 16:29:20.300: INFO: Pod "downwardapi-volume-0b164f7a-aab6-4835-9531-352a4ea4b81e": Phase="Pending", Reason="", readiness=false. Elapsed: 50.150053ms Feb 18 16:29:22.308: INFO: Pod "downwardapi-volume-0b164f7a-aab6-4835-9531-352a4ea4b81e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.057782579s Feb 18 16:29:24.320: INFO: Pod "downwardapi-volume-0b164f7a-aab6-4835-9531-352a4ea4b81e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069949477s Feb 18 16:29:26.765: INFO: Pod "downwardapi-volume-0b164f7a-aab6-4835-9531-352a4ea4b81e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.515015115s Feb 18 16:29:28.774: INFO: Pod "downwardapi-volume-0b164f7a-aab6-4835-9531-352a4ea4b81e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.524382912s STEP: Saw pod success Feb 18 16:29:28.775: INFO: Pod "downwardapi-volume-0b164f7a-aab6-4835-9531-352a4ea4b81e" satisfied condition "success or failure" Feb 18 16:29:28.779: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-0b164f7a-aab6-4835-9531-352a4ea4b81e container client-container: STEP: delete the pod Feb 18 16:29:28.836: INFO: Waiting for pod downwardapi-volume-0b164f7a-aab6-4835-9531-352a4ea4b81e to disappear Feb 18 16:29:28.841: INFO: Pod downwardapi-volume-0b164f7a-aab6-4835-9531-352a4ea4b81e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:29:28.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8979" for this suite. • [SLOW TEST:8.713 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":104,"skipped":1358,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:29:28.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Feb 18 16:29:29.058: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 18 16:29:29.113: INFO: Waiting for terminating namespaces to be deleted... 
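------------------------------
The Downward API volume spec above projects the container's own CPU limit into a file that the container then reads back. The wiring, sketched with illustrative values; the Divisor turns a 500m limit into the plain string "500" in the file:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// limits.cpu divided by 1m, so the file reads "500".
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"),
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec.Volumes, "", "  ")
	fmt.Println(string(out))
}

------------------------------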
Feb 18 16:29:29.118: INFO: Logging pods the kubelet thinks is on node jerma-node before test Feb 18 16:29:29.129: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Feb 18 16:29:29.129: INFO: Container kube-proxy ready: true, restart count 0 Feb 18 16:29:29.129: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 18 16:29:29.129: INFO: Container weave ready: true, restart count 1 Feb 18 16:29:29.129: INFO: Container weave-npc ready: true, restart count 0 Feb 18 16:29:29.129: INFO: test-container-pod from pod-network-test-6667 started at 2020-02-18 16:29:11 +0000 UTC (1 container statuses recorded) Feb 18 16:29:29.129: INFO: Container webserver ready: true, restart count 0 Feb 18 16:29:29.129: INFO: netserver-0 from pod-network-test-6667 started at 2020-02-18 16:28:43 +0000 UTC (1 container statuses recorded) Feb 18 16:29:29.129: INFO: Container webserver ready: true, restart count 0 Feb 18 16:29:29.129: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Feb 18 16:29:29.159: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 18 16:29:29.160: INFO: Container kube-scheduler ready: true, restart count 15 Feb 18 16:29:29.160: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 18 16:29:29.160: INFO: Container kube-apiserver ready: true, restart count 1 Feb 18 16:29:29.160: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 18 16:29:29.160: INFO: Container etcd ready: true, restart count 1 Feb 18 16:29:29.160: INFO: netserver-1 from pod-network-test-6667 started at 2020-02-18 16:28:43 +0000 UTC (1 container statuses recorded) Feb 18 16:29:29.160: INFO: Container webserver ready: true, restart count 0 Feb 18 16:29:29.160: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 18 16:29:29.160: INFO: Container coredns ready: true, restart count 0 Feb 18 16:29:29.160: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 18 16:29:29.160: INFO: Container coredns ready: true, restart count 0 Feb 18 16:29:29.160: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 18 16:29:29.160: INFO: Container kube-controller-manager ready: true, restart count 11 Feb 18 16:29:29.160: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Feb 18 16:29:29.160: INFO: Container kube-proxy ready: true, restart count 0 Feb 18 16:29:29.160: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 18 16:29:29.160: INFO: Container weave ready: true, restart count 0 Feb 18 16:29:29.160: INFO: Container weave-npc ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. 
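------------------------------
The steps that follow pin this predicate down to two ContainerPort claims: pod4 takes hostPort 54322 with an empty hostIP, which the scheduler treats as 0.0.0.0, i.e. every address on the node; pod5 then asks for the same port and protocol on 127.0.0.1 of the same node. Since 0.0.0.0 already covers 127.0.0.1, pod5 can never schedule there, and the spec spends most of its 318 seconds confirming that pod5 stays Pending. The two claims, sketched (container port number illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// pod4's port: hostIP left empty, so the port is claimed on every
	// address of the node (the "0.0.0.0(empty string here)" in the log).
	pod4Port := corev1.ContainerPort{
		ContainerPort: 8080,
		HostPort:      54322,
		Protocol:      corev1.ProtocolTCP,
	}
	// pod5's port: same hostPort and protocol, but bound to 127.0.0.1.
	// Because 0.0.0.0 already covers 127.0.0.1, pod5 conflicts with pod4.
	pod5Port := corev1.ContainerPort{
		ContainerPort: 8080,
		HostPort:      54322,
		HostIP:        "127.0.0.1",
		Protocol:      corev1.ProtocolTCP,
	}
	for _, p := range []corev1.ContainerPort{pod4Port, pod5Port} {
		b, _ := json.Marshal(p)
		fmt.Println(string(b))
	}
}

------------------------------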
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-25c7a2e9-e5a3-4755-b7a2-a63947e45663 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-25c7a2e9-e5a3-4755-b7a2-a63947e45663 off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-25c7a2e9-e5a3-4755-b7a2-a63947e45663 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:34:47.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2963" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:318.612 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":280,"completed":105,"skipped":1365,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:34:47.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:34:47.541: INFO: Creating deployment "webserver-deployment" Feb 18 16:34:47.547: INFO: Waiting for observed generation 1 Feb 18 16:34:50.853: INFO: Waiting for all required pods to come up Feb 18 16:34:51.172: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 18 16:35:19.203: INFO: Waiting for deployment "webserver-deployment" to complete Feb 18 16:35:19.218: INFO: Updating deployment "webserver-deployment" with a non-existent image Feb 18 16:35:19.238: INFO: Updating deployment webserver-deployment Feb 18 16:35:19.238: INFO: Waiting for observed generation 2 Feb 18 16:35:22.431: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 18 16:35:22.678: INFO: Waiting for the 
first rollout's replicaset to have .spec.replicas = 8 Feb 18 16:35:22.748: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Feb 18 16:35:23.390: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 18 16:35:23.390: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 18 16:35:23.393: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Feb 18 16:35:23.400: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Feb 18 16:35:23.400: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Feb 18 16:35:23.410: INFO: Updating deployment webserver-deployment Feb 18 16:35:23.410: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Feb 18 16:35:25.244: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 18 16:35:32.639: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Feb 18 16:35:39.372: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7236 /apis/apps/v1/namespaces/deployment-7236/deployments/webserver-deployment ad513ce7-90d6-4d24-8806-bd1508fc67a1 9210625 3 2020-02-18 16:34:47 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00304d598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-18 16:35:25 +0000 UTC,LastTransitionTime:2020-02-18 16:35:25 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-02-18 16:35:34 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Feb 18 16:35:43.857: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-7236 
/apis/apps/v1/namespaces/deployment-7236/replicasets/webserver-deployment-c7997dcc8 d9c2e2a1-2a4d-4815-ab45-e1c6e2be065c 9210621 3 2020-02-18 16:35:19 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment ad513ce7-90d6-4d24-8806-bd1508fc67a1 0xc002ff7df7 0xc002ff7df8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ff7e68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 18 16:35:43.857: INFO: All old ReplicaSets of Deployment "webserver-deployment": Feb 18 16:35:43.857: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-7236 /apis/apps/v1/namespaces/deployment-7236/replicasets/webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 9210601 3 2020-02-18 16:34:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment ad513ce7-90d6-4d24-8806-bd1508fc67a1 0xc002ff7d37 0xc002ff7d38}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ff7d98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Feb 18 16:35:44.992: INFO: Pod "webserver-deployment-595b5b9587-2j594" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2j594 webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-2j594 8deedef3-43fc-4b00-a9b9-e6f555de2f0e 
9210445 0 2020-02-18 16:34:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc002dbd747 0xc002dbd748}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-02-18 16:34:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 16:35:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://d01dfef1a395ed29b37c17b866530b627dd30dd33d477e573ecdb25009e06c68,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 16:35:44.993: INFO: Pod "webserver-deployment-595b5b9587-4nnl4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4nnl4 webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-4nnl4 27f2e2ee-87ef-4c02-ba0a-2b834a622748 9210586 0 2020-02-18 16:35:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc002dbd8c0 0xc002dbd8c1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 16:35:44.993: INFO: Pod "webserver-deployment-595b5b9587-5fmjz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5fmjz webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-5fmjz f517a0d7-bc1c-44c1-9e39-189f133fb6c5 9210627 0 2020-02-18 16:35:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc002dbd9e7 0xc002dbd9e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-18 16:35:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 16:35:44.993: INFO: Pod "webserver-deployment-595b5b9587-7zvvg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7zvvg webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-7zvvg 1d9da18d-3340-475e-bb4e-4870c23d2233 9210569 0 2020-02-18 16:35:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc002dbdb47 0xc002dbdb48}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:44.993: INFO: Pod "webserver-deployment-595b5b9587-85rds" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-85rds webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-85rds fe06aea3-9e5b-4095-b079-ce891772c162 9210402 0 2020-02-18 16:34:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc002dbdc87 0xc002dbdc88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-18 16:34:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 16:35:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://945213bb9c9ec52be700abe0f1668f918429cf4a48aea467fbcfd2936ba81db0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 16:35:44.994: INFO: Pod "webserver-deployment-595b5b9587-bjfhs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bjfhs webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-bjfhs 49bfab34-4d1c-442a-84cc-7837cf50e8e5 9210628 0 2020-02-18 16:35:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc002dbde00 0xc002dbde01}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-18 16:35:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 16:35:44.994: INFO: Pod "webserver-deployment-595b5b9587-bx2tn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bx2tn webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-bx2tn e95693f1-05d6-4911-8669-b6381e1e889f 9210589 0 2020-02-18 16:35:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc002dbdf47 0xc002dbdf48}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-18 16:35:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 16:35:44.994: INFO: Pod "webserver-deployment-595b5b9587-h6hsg" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-h6hsg webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-h6hsg f81baf1a-9a17-4116-8680-7c5a51c9d71e 9210464 0 2020-02-18 16:34:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc0033520a7 0xc0033520a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-02-18 16:34:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 16:35:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b11f0347fd143bde4c073bbedbb32ece388bf8f9eef0ae079eda6a19233623b5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 16:35:44.994: INFO: Pod "webserver-deployment-595b5b9587-htdw6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-htdw6 webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-htdw6 2324530e-c576-4b11-b3a6-f2dccb3950df 9210593 0 2020-02-18 16:35:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc003352230 0xc003352231}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:44.995: INFO: Pod "webserver-deployment-595b5b9587-mrqwk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mrqwk webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-mrqwk 523e35bd-663e-4266-a1e4-3e216df4f719 9210578 0 2020-02-18 16:35:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc003352347 0xc003352348}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:44.995: INFO: Pod "webserver-deployment-595b5b9587-ngjck" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ngjck webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-ngjck c3db27b1-5bd9-46b2-8d7a-bf43d9661df4 9210641 0 2020-02-18 
16:35:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc003352467 0xc003352468}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-18 16:35:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 16:35:44.995: INFO: Pod "webserver-deployment-595b5b9587-rln48" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rln48 webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-rln48 efc09a23-90dd-436b-909b-c4bfcf094eb6 9210635 0 2020-02-18 16:35:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc0033525b7 0xc0033525b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-18 16:35:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 18 16:35:44.996: INFO: Pod "webserver-deployment-595b5b9587-thxjk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-thxjk webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-thxjk 773148c8-51bd-4b53-b6dd-0d4b8cb3c257 9210440 0 2020-02-18 16:34:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc003352707 0xc003352708}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-02-18 16:34:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 16:35:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://3c714e5f81f11b772e2c1aafe20a95d3edf7a4bb886cbb467c072dc945de1271,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:44.996: INFO: Pod "webserver-deployment-595b5b9587-v8s5c" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v8s5c webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-v8s5c 0e2e4feb-2123-4983-b8b4-e306446cbcd3 9210584 0 2020-02-18 16:35:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc003352880 0xc003352881}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,Tolerat
ionSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:44.996: INFO: Pod "webserver-deployment-595b5b9587-vwht8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vwht8 webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-vwht8 1f059296-4abd-4d02-af66-1c05bebc2a1a 9210594 0 2020-02-18 16:35:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc003352997 0xc003352998}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSe
conds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-18 16:35:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
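The contrast between the dump above and the Running dumps that follow comes down to the pod's Ready condition: while the httpd container is still in ContainerCreating, Ready stays False with reason ContainersNotReady and the framework logs the pod as "is not available"; once Ready flips to True it is logged as "is available". A minimal sketch of that classification, assuming the k8s.io/api module is on the module path (this is not the e2e framework's own helper):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the PodReady condition is True; the dumps
    // in this log say "is available" only once that is the case.
    func isPodReady(pod *v1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == v1.PodReady {
                return cond.Status == v1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Mirrors the vwht8 dump above: Pending, Ready=False/ContainersNotReady.
        pod := &v1.Pod{Status: v1.PodStatus{
            Phase: v1.PodPending,
            Conditions: []v1.PodCondition{
                {Type: v1.PodReady, Status: v1.ConditionFalse, Reason: "ContainersNotReady"},
            },
        }}
        fmt.Println(isPodReady(pod)) // false, i.e. "is not available"
    }

Applied to webserver-deployment-595b5b9587-vwht8 above this returns false; applied to the w4xj8 dump below, with Ready=True, it returns true.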
Feb 18 16:35:44.996: INFO: Pod "webserver-deployment-595b5b9587-w4xj8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w4xj8 webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-w4xj8 e716e59f-fb05-4aac-a817-3e000d016697 9210455 0 2020-02-18 16:34:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc003352ae7 0xc003352ae8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-02-18 16:34:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 16:35:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://52c16e4079ce83a3232280020e434bfa580e3e18f8b2d186188c8e66dcb60ef5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:44.996: INFO: Pod "webserver-deployment-595b5b9587-wlmpc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wlmpc webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-wlmpc b988022b-431f-4364-b9df-7b04ca73406f 9210429 0 2020-02-18 16:34:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc003352c50 0xc003352c51}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationS
econds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.6,StartTime:2020-02-18 16:34:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 16:35:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://a3d93cbcf9645e5851d923d7967fe31415fe3d53d7f43aeab14438f1d4f8a8a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
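Every pod dumped here reports QOSClass:BestEffort because the httpd container sets neither resource requests nor limits (Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{}}). A simplified sketch of that classification, again assuming k8s.io/api; the kubelet's real computation also distinguishes Guaranteed from Burstable and inspects init containers:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // bestEffort is a simplified check: a pod whose containers declare no
    // requests or limits lands in the BestEffort QoS class, which is why
    // every dump in this test prints QOSClass:BestEffort.
    func bestEffort(pod *v1.Pod) bool {
        for _, c := range pod.Spec.Containers {
            if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
                return false
            }
        }
        return true
    }

    func main() {
        // No Resources set, like the httpd containers in these dumps.
        pod := &v1.Pod{Spec: v1.PodSpec{Containers: []v1.Container{{Name: "httpd"}}}}
        fmt.Println(bestEffort(pod)) // true
    }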
Feb 18 16:35:44.997: INFO: Pod "webserver-deployment-595b5b9587-x8tt9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-x8tt9 webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-x8tt9 28594ac6-9d6a-4a81-88e6-7b632a27e9c5 9210592 0 2020-02-18 16:35:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc003352dc0 0xc003352dc1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:44.997: INFO: Pod "webserver-deployment-595b5b9587-xcs6s" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xcs6s webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-xcs6s db35bda0-f2bd-4578-a28d-fbefdfe47a7a 9210443 0 2020-02-18 16:34:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 
ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc003352ec7 0xc003352ec8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-02-18 16:34:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 16:35:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://2a3453d7dad2b14d0c3d30729e60d25bedbe8151dc06afd26697951bc326fc2a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:44.997: INFO: Pod "webserver-deployment-595b5b9587-xjnms" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xjnms webserver-deployment-595b5b9587- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-595b5b9587-xjnms c7051641-c5e6-4058-bb29-58dc067efd22 9210461 0 2020-02-18 16:34:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 db2a1430-b5dd-47be-8c33-7d87cf08d11c 0xc003353040 0xc003353041}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExec
ute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:34:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-02-18 16:34:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 16:35:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://0623b77d145d948350b45da5cd3ff95a7f8ba31a5cdeede236f8abff0dfd2d71,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:44.998: INFO: Pod "webserver-deployment-c7997dcc8-2xf4n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2xf4n webserver-deployment-c7997dcc8- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-c7997dcc8-2xf4n 0ea260c1-876a-4ace-8a9e-17ba40e319b9 9210577 0 2020-02-18 16:35:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d9c2e2a1-2a4d-4815-ab45-e1c6e2be065c 0xc0033531a0 0xc0033531a1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:44.998: INFO: Pod "webserver-deployment-c7997dcc8-59fcg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-59fcg webserver-deployment-c7997dcc8- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-c7997dcc8-59fcg 19a32ef8-ac50-4e08-885d-4f4e2689881c 9210530 0 2020-02-18 16:35:19 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 d9c2e2a1-2a4d-4815-ab45-e1c6e2be065c 0xc0033532d7 0xc0033532d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-18 16:35:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:44.998: INFO: Pod "webserver-deployment-c7997dcc8-6486v" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6486v webserver-deployment-c7997dcc8- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-c7997dcc8-6486v acaa979f-832b-42b0-baa5-5eaa2772b8a4 9210532 0 2020-02-18 16:35:19 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d9c2e2a1-2a4d-4815-ab45-e1c6e2be065c 0xc003353447 0xc003353448}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readines
sGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-18 16:35:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:44.999: INFO: Pod "webserver-deployment-c7997dcc8-7nvn7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7nvn7 webserver-deployment-c7997dcc8- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-c7997dcc8-7nvn7 bc90c151-3ab8-4dd2-a8d6-578617f2cb94 9210634 0 2020-02-18 16:35:19 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d9c2e2a1-2a4d-4815-ab45-e1c6e2be065c 0xc0033535c7 0xc0033535c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-18 16:35:19 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login',},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
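This dump shows the root cause of the stalled half of the rollout: the c7997dcc8 pods reference the deliberately unpullable image webserver:404, so the kubelet parks the container in Waiting with reason ErrImagePull ("pull access denied for webserver, repository does not exist or may require 'docker login'") and the pod never leaves Pending. A sketch for surfacing such Waiting reasons from a pod's container statuses (waitingReasons is a hypothetical helper, not framework API):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // waitingReasons gathers each container's Waiting reason, e.g.
    // "httpd: ErrImagePull" for the 7nvn7 pod above.
    func waitingReasons(pod *v1.Pod) []string {
        var out []string
        for _, cs := range pod.Status.ContainerStatuses {
            if w := cs.State.Waiting; w != nil {
                out = append(out, fmt.Sprintf("%s: %s", cs.Name, w.Reason))
            }
        }
        return out
    }

    func main() {
        pod := &v1.Pod{Status: v1.PodStatus{ContainerStatuses: []v1.ContainerStatus{{
            Name:  "httpd",
            State: v1.ContainerState{Waiting: &v1.ContainerStateWaiting{Reason: "ErrImagePull"}},
        }}}}
        fmt.Println(waitingReasons(pod)) // [httpd: ErrImagePull]
    }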
Feb 18 16:35:44.999: INFO: Pod "webserver-deployment-c7997dcc8-9fzc5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9fzc5 webserver-deployment-c7997dcc8- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-c7997dcc8-9fzc5 e5e55bfd-8445-4cb8-ad30-392fb0c18c0f 9210602 0 2020-02-18 16:35:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d9c2e2a1-2a4d-4815-ab45-e1c6e2be065c 0xc003353757 0xc003353758}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:44.999: INFO: Pod "webserver-deployment-c7997dcc8-b7ssm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b7ssm webserver-deployment-c7997dcc8- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-c7997dcc8-b7ssm 19276785-099d-47af-bda7-224f086ff60b 9210575 0 2020-02-18 16:35:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d9c2e2a1-2a4d-4815-ab45-e1c6e2be065c 0xc003353877 0xc003353878}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecon
ds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:45.000: INFO: Pod "webserver-deployment-c7997dcc8-hbjpl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hbjpl webserver-deployment-c7997dcc8- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-c7997dcc8-hbjpl 77cdca43-6e86-4778-bc43-32eba607af91 9210522 0 2020-02-18 16:35:19 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d9c2e2a1-2a4d-4815-ab45-e1c6e2be065c 0xc0033539a7 0xc0033539a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kuber
netes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-18 16:35:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:45.000: INFO: Pod "webserver-deployment-c7997dcc8-mfqdt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mfqdt webserver-deployment-c7997dcc8- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-c7997dcc8-mfqdt 5bdca7ff-49d1-4d66-83e9-e47ceb84a7ff 9210571 0 2020-02-18 16:35:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d9c2e2a1-2a4d-4815-ab45-e1c6e2be065c 0xc003353b17 0xc003353b18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:45.000: INFO: Pod "webserver-deployment-c7997dcc8-nsdk5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nsdk5 webserver-deployment-c7997dcc8- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-c7997dcc8-nsdk5 6ca094ca-ab82-49a6-b07c-d89511385c4c 9210617 0 2020-02-18 16:35:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 d9c2e2a1-2a4d-4815-ab45-e1c6e2be065c 0xc003353c37 0xc003353c38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:45.001: INFO: Pod "webserver-deployment-c7997dcc8-s5v9l" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s5v9l webserver-deployment-c7997dcc8- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-c7997dcc8-s5v9l c4031636-284b-4fd3-8897-792d8c02bc4a 9210515 0 2020-02-18 16:35:19 +0000 UTC 
map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d9c2e2a1-2a4d-4815-ab45-e1c6e2be065c 0xc003353d67 0xc003353d68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-02-18 16:35:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-18 16:35:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:45.001: INFO: Pod "webserver-deployment-c7997dcc8-tm29t" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tm29t webserver-deployment-c7997dcc8- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-c7997dcc8-tm29t b9842af7-6e3e-4b1f-9c46-2a412ea5e61d 9210597 0 2020-02-18 16:35:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d9c2e2a1-2a4d-4815-ab45-e1c6e2be065c 0xc003353ee7 0xc003353ee8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0
,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:45.001: INFO: Pod "webserver-deployment-c7997dcc8-w4ljl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w4ljl webserver-deployment-c7997dcc8- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-c7997dcc8-w4ljl ab06e15a-dab1-4753-a4d5-781c196ff36f 9210598 0 2020-02-18 16:35:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d9c2e2a1-2a4d-4815-ab45-e1c6e2be065c 0xc002fb8017 0xc002fb8018}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300
,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:35:45.002: INFO: Pod "webserver-deployment-c7997dcc8-w9gg8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w9gg8 webserver-deployment-c7997dcc8- deployment-7236 /api/v1/namespaces/deployment-7236/pods/webserver-deployment-c7997dcc8-w9gg8 2c4b81ab-c76b-451e-a751-e6b166cafe33 9210603 0 2020-02-18 16:35:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d9c2e2a1-2a4d-4815-ab45-e1c6e2be065c 0xc002fb8147 0xc002fb8148}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xndxv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xndxv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xndxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.i
o/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:35:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:35:45.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7236" for this suite. • [SLOW TEST:59.080 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":280,"completed":106,"skipped":1368,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:35:46.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 18 16:37:07.650: INFO: Waiting up to 5m0s for pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc" in namespace "emptydir-1807" to be "success or failure" Feb 18 16:37:07.660: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.434473ms Feb 18 16:37:09.667: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016575253s Feb 18 16:37:11.674: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023434019s Feb 18 16:37:15.620: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.969851687s Feb 18 16:37:19.205: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.554790912s Feb 18 16:37:21.368: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.717613458s Feb 18 16:37:25.723: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.072775616s Feb 18 16:37:29.164: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Pending", Reason="", readiness=false. Elapsed: 21.514197835s Feb 18 16:37:31.341: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Pending", Reason="", readiness=false. Elapsed: 23.69102022s Feb 18 16:37:33.501: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Pending", Reason="", readiness=false. Elapsed: 25.850348315s Feb 18 16:37:35.749: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.098643847s Feb 18 16:37:38.854: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Pending", Reason="", readiness=false. Elapsed: 31.203746776s Feb 18 16:37:41.445: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Pending", Reason="", readiness=false. Elapsed: 33.79484209s Feb 18 16:37:43.483: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Pending", Reason="", readiness=false. Elapsed: 35.832968937s Feb 18 16:37:45.548: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.897807817s STEP: Saw pod success Feb 18 16:37:45.548: INFO: Pod "pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc" satisfied condition "success or failure" Feb 18 16:37:45.552: INFO: Trying to get logs from node jerma-node pod pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc container test-container: STEP: delete the pod Feb 18 16:37:45.628: INFO: Waiting for pod pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc to disappear Feb 18 16:37:45.746: INFO: Pod pod-1b3573e0-7e16-4381-97bb-b5119a3f96dc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:37:45.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1807" for this suite. 
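For context on what the (root,0666,tmpfs) case above exercises: the pod writes a mode-0666 file into a memory-backed emptyDir volume and exits, and the harness polls the pod for "success or failure". A minimal sketch of an equivalent manifest, assuming a plain busybox image in place of the e2e test image (all names here are illustrative, not taken from this run):
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Create a file, force mode 0666, and print the mode so it can be checked in the log.
    command: ["sh", "-c", "touch /mnt/test/f && chmod 0666 /mnt/test/f && stat -c '%a' /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                # tmpfs-backed, matching the [tmpfs] variant
A zero exit status moves the pod to Succeeded, which is the condition the waits above are polling for.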
• [SLOW TEST:119.190 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":107,"skipped":1391,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:37:45.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-90efd33d-48ea-4183-9846-16d858d3cbfd STEP: Creating secret with name s-test-opt-upd-3304aa57-c7ea-44df-a48d-2f6fcc16e495 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-90efd33d-48ea-4183-9846-16d858d3cbfd STEP: Updating secret s-test-opt-upd-3304aa57-c7ea-44df-a48d-2f6fcc16e495 STEP: Creating secret with name s-test-opt-create-0b79b886-c518-4032-bfe7-601c88bdf9a8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:38:04.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3236" for this suite. 
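The projected-secret case above exercises optional secret sources in a single projected volume: the test deletes one source secret, updates another, creates a third, and waits for the kubelet to resync the mounted files. A sketch of the volume shape, with hypothetical secret names standing in for the generated ones above:
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo       # hypothetical
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: all-secrets
      mountPath: /etc/projected
  volumes:
  - name: all-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del      # optional: deleting it removes its files instead of failing the pod
          optional: true
      - secret:
          name: s-test-opt-upd
          optional: true
Because every source is optional, the pod keeps running through the delete/update/create sequence, and the changes eventually appear under /etc/projected, which is what "waiting to observe update in volume" checks.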
• [SLOW TEST:18.548 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":108,"skipped":1405,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:38:04.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 18 16:38:14.925: INFO: Successfully updated pod "pod-update-activedeadlineseconds-259a8796-7c3c-44ec-abd8-c3f801ed30ec" Feb 18 16:38:14.925: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-259a8796-7c3c-44ec-abd8-c3f801ed30ec" in namespace "pods-3127" to be "terminated due to deadline exceeded" Feb 18 16:38:14.933: INFO: Pod "pod-update-activedeadlineseconds-259a8796-7c3c-44ec-abd8-c3f801ed30ec": Phase="Running", Reason="", readiness=true. Elapsed: 7.28082ms Feb 18 16:38:16.937: INFO: Pod "pod-update-activedeadlineseconds-259a8796-7c3c-44ec-abd8-c3f801ed30ec": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.011812278s Feb 18 16:38:16.937: INFO: Pod "pod-update-activedeadlineseconds-259a8796-7c3c-44ec-abd8-c3f801ed30ec" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:38:16.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3127" for this suite. 
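The deadline test above works because spec.activeDeadlineSeconds is one of the few pod spec fields that may be changed on a running pod; once set, the kubelet fails the pod with reason DeadlineExceeded when the deadline passes, which is the Phase="Failed" transition visible in the log. A sketch of the same update done by hand, with a hypothetical pod name:
# Strategic-merge patch; apply with:
#   kubectl patch pod pod-update-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
spec:
  activeDeadlineSeconds: 5          # kubelet kills and fails the pod ~5s after it became active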
• [SLOW TEST:12.639 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":280,"completed":109,"skipped":1410,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:38:16.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap configmap-2615/configmap-test-bffc7efb-d6a1-4c49-98df-6d380a2bd531 STEP: Creating a pod to test consume configMaps Feb 18 16:38:17.476: INFO: Waiting up to 5m0s for pod "pod-configmaps-14f861a2-5f75-4da6-b6cf-ccb1ba2c7714" in namespace "configmap-2615" to be "success or failure" Feb 18 16:38:17.524: INFO: Pod "pod-configmaps-14f861a2-5f75-4da6-b6cf-ccb1ba2c7714": Phase="Pending", Reason="", readiness=false. Elapsed: 48.304936ms Feb 18 16:38:19.532: INFO: Pod "pod-configmaps-14f861a2-5f75-4da6-b6cf-ccb1ba2c7714": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056234423s Feb 18 16:38:21.539: INFO: Pod "pod-configmaps-14f861a2-5f75-4da6-b6cf-ccb1ba2c7714": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063240927s Feb 18 16:38:23.595: INFO: Pod "pod-configmaps-14f861a2-5f75-4da6-b6cf-ccb1ba2c7714": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119306241s Feb 18 16:38:25.604: INFO: Pod "pod-configmaps-14f861a2-5f75-4da6-b6cf-ccb1ba2c7714": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128115175s Feb 18 16:38:27.614: INFO: Pod "pod-configmaps-14f861a2-5f75-4da6-b6cf-ccb1ba2c7714": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.138019143s STEP: Saw pod success Feb 18 16:38:27.614: INFO: Pod "pod-configmaps-14f861a2-5f75-4da6-b6cf-ccb1ba2c7714" satisfied condition "success or failure" Feb 18 16:38:27.620: INFO: Trying to get logs from node jerma-node pod pod-configmaps-14f861a2-5f75-4da6-b6cf-ccb1ba2c7714 container env-test: STEP: delete the pod Feb 18 16:38:27.660: INFO: Waiting for pod pod-configmaps-14f861a2-5f75-4da6-b6cf-ccb1ba2c7714 to disappear Feb 18 16:38:27.683: INFO: Pod pod-configmaps-14f861a2-5f75-4da6-b6cf-ccb1ba2c7714 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:38:27.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2615" for this suite. 
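The ConfigMap environment test above maps ConfigMap keys into container environment variables and has the container print them, again relying on the pod's exit status for the verdict. A minimal sketch with hypothetical names (the run above uses generated ones):
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-demo         # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    # Print the injected variable so the harness can verify it from the container log.
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test-demo
          key: data-1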
• [SLOW TEST:10.757 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":110,"skipped":1412,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:38:27.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:38:27.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7673' Feb 18 16:38:31.659: INFO: stderr: "" Feb 18 16:38:31.659: INFO: stdout: "replicationcontroller/agnhost-master created\n" Feb 18 16:38:31.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7673' Feb 18 16:38:32.387: INFO: stderr: "" Feb 18 16:38:32.388: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Feb 18 16:38:33.396: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 16:38:33.397: INFO: Found 0 / 1 Feb 18 16:38:34.395: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 16:38:34.396: INFO: Found 0 / 1 Feb 18 16:38:35.395: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 16:38:35.395: INFO: Found 0 / 1 Feb 18 16:38:36.396: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 16:38:36.396: INFO: Found 0 / 1 Feb 18 16:38:37.396: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 16:38:37.397: INFO: Found 0 / 1 Feb 18 16:38:38.398: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 16:38:38.398: INFO: Found 0 / 1 Feb 18 16:38:39.396: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 16:38:39.396: INFO: Found 1 / 1 Feb 18 16:38:39.397: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 18 16:38:39.402: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 16:38:39.402: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 18 16:38:39.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-wl7z2 --namespace=kubectl-7673' Feb 18 16:38:39.583: INFO: stderr: "" Feb 18 16:38:39.583: INFO: stdout: "Name: agnhost-master-wl7z2\nNamespace: kubectl-7673\nPriority: 0\nNode: jerma-node/10.96.2.250\nStart Time: Tue, 18 Feb 2020 16:38:31 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.44.0.1\nIPs:\n IP: 10.44.0.1\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: docker://c8d5dd144afb28ccd78849d704b6e478517f1bb120f8db3f636fd1f894d798ca\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 18 Feb 2020 16:38:38 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-2x42t (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-2x42t:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-2x42t\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled <unknown> default-scheduler Successfully assigned kubectl-7673/agnhost-master-wl7z2 to jerma-node\n Normal Pulled 4s kubelet, jerma-node Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-node Created container agnhost-master\n Normal Started 1s kubelet, jerma-node Started container agnhost-master\n" Feb 18 16:38:39.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7673' Feb 18 16:38:39.732: INFO: stderr: "" Feb 18 16:38:39.732: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7673\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: agnhost-master-wl7z2\n" Feb 18 16:38:39.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7673' Feb 18 16:38:39.883: INFO: stderr: "" Feb 18 16:38:39.883: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7673\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.59.183\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: <none>\n" Feb 18 16:38:39.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node' Feb 18 16:38:40.180: INFO: stderr: "" Feb 18 16:38:40.180: INFO: stdout: "Name: jerma-node\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n
kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 04 Jan 2020 11:59:52 +0000\nTaints: <none>\nUnschedulable: false\nLease:\n HolderIdentity: jerma-node\n AcquireTime: <unset>\n RenewTime: Tue, 18 Feb 2020 16:38:37 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 04 Jan 2020 12:00:49 +0000 Sat, 04 Jan 2020 12:00:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Tue, 18 Feb 2020 16:36:25 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 18 Feb 2020 16:36:25 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 18 Feb 2020 16:36:25 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 18 Feb 2020 16:36:25 +0000 Sat, 04 Jan 2020 12:00:52 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.2.250\n Hostname: jerma-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: bdc16344252549dd902c3a5d68b22f41\n System UUID: BDC16344-2525-49DD-902C-3A5D68B22F41\n Boot ID: eec61fc4-8bf6-487f-8f93-ea9731fe757a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-dsf66 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45d\n kube-system weave-net-kz8lv 20m (0%) 0 (0%) 0 (0%) 0 (0%) 45d\n kubectl-7673 agnhost-master-wl7z2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Feb 18 16:38:40.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7673' Feb 18 16:38:40.274: INFO: stderr: "" Feb 18 16:38:40.274: INFO: stdout: "Name: kubectl-7673\nLabels: e2e-framework=kubectl\n e2e-run=b868a767-b5fa-4ab2-9d3c-da88877e1d20\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:38:40.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7673" for this suite.
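The describe output above also pins down the shape of the resources that were piped to 'kubectl create -f -' at the start of the test. A reconstruction from that output (the exact e2e manifest may differ in details; the named container port is inferred from the TargetPort in the service description):
apiVersion: v1
kind: ReplicationController
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
spec:
  replicas: 1
  selector:
    app: agnhost
    role: master
  template:
    metadata:
      labels:
        app: agnhost
        role: master
    spec:
      containers:
      - name: agnhost-master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        ports:
        - name: agnhost-server      # inferred: the service targets this named port
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
spec:
  selector:
    app: agnhost
    role: master
  ports:
  - port: 6379
    targetPort: agnhost-server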
• [SLOW TEST:12.580 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1156 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":280,"completed":111,"skipped":1436,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:38:40.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-14c44542-b5d9-47a9-8e2f-51ec31f168c5 STEP: Creating a pod to test consume configMaps Feb 18 16:38:40.419: INFO: Waiting up to 5m0s for pod "pod-configmaps-7902e8df-b870-4c21-ac63-1d0d402540bc" in namespace "configmap-8881" to be "success or failure" Feb 18 16:38:40.425: INFO: Pod "pod-configmaps-7902e8df-b870-4c21-ac63-1d0d402540bc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.818988ms Feb 18 16:38:42.434: INFO: Pod "pod-configmaps-7902e8df-b870-4c21-ac63-1d0d402540bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014660072s Feb 18 16:38:44.442: INFO: Pod "pod-configmaps-7902e8df-b870-4c21-ac63-1d0d402540bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022753469s Feb 18 16:38:46.460: INFO: Pod "pod-configmaps-7902e8df-b870-4c21-ac63-1d0d402540bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041061644s Feb 18 16:38:48.518: INFO: Pod "pod-configmaps-7902e8df-b870-4c21-ac63-1d0d402540bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098710701s Feb 18 16:38:50.539: INFO: Pod "pod-configmaps-7902e8df-b870-4c21-ac63-1d0d402540bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.12021971s STEP: Saw pod success Feb 18 16:38:50.540: INFO: Pod "pod-configmaps-7902e8df-b870-4c21-ac63-1d0d402540bc" satisfied condition "success or failure" Feb 18 16:38:50.567: INFO: Trying to get logs from node jerma-node pod pod-configmaps-7902e8df-b870-4c21-ac63-1d0d402540bc container configmap-volume-test: STEP: delete the pod Feb 18 16:38:50.708: INFO: Waiting for pod pod-configmaps-7902e8df-b870-4c21-ac63-1d0d402540bc to disappear Feb 18 16:38:50.726: INFO: Pod pod-configmaps-7902e8df-b870-4c21-ac63-1d0d402540bc no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:38:50.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8881" for this suite. • [SLOW TEST:10.455 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":112,"skipped":1439,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:38:50.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with configMap that has name projected-configmap-test-upd-15737f34-b0a9-4776-9251-506933bb7448 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-15737f34-b0a9-4776-9251-506933bb7448 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:40:26.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6309" for this suite. 
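The projected-configMap update test above keeps a pod running while the ConfigMap behind its projected volume is edited, then waits for the new value to appear in the mounted file; projected (and plain configMap) volumes are refreshed by the kubelet on its periodic sync, so propagation is eventually consistent rather than immediate, which is why the test budgets a long wait. A sketch with hypothetical names:
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo    # hypothetical
spec:
  containers:
  - name: reader
    image: busybox
    # Poll the mounted key so the updated value becomes visible in the container log.
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd   # hypothetical stand-in for the generated name above
Edits to the ConfigMap propagate into /etc/cm without restarting the pod.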
• [SLOW TEST:96.087 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":113,"skipped":1459,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:40:26.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 18 16:40:27.334: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 18 16:40:29.355: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:40:31.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:40:33.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:40:35.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640827, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 18 16:40:38.406: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:40:38.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7152" for this suite. STEP: Destroying namespace "webhook-7152-markers" for this suite. 
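"Fail closed" above refers to failurePolicy: Fail on the webhook registration: when the API server cannot reach the webhook backend, matching requests are rejected outright instead of being admitted. A minimal sketch of such a registration (the configuration name, webhook name, and path are illustrative; the service name and namespace echo the ones in this run, and the test likewise points at an endpoint its webhook server never answers):
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-demo            # hypothetical
webhooks:
- name: fail-closed.example.com     # hypothetical
  failurePolicy: Fail               # reject the request whenever the webhook cannot answer
  sideEffects: None
  admissionReviewVersions: ["v1"]
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      name: e2e-test-webhook        # service name as waited on above
      namespace: webhook-7152
      path: /unreachable-path       # hypothetical; any path the server does not serve
With that in place, creating a ConfigMap the rules match fails with an admission error, which is exactly what the "unconditionally rejected" step asserts.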
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:11.872 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":280,"completed":114,"skipped":1465,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:40:38.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1502 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1502 I0218 16:40:39.005379 9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1502, replica count: 2 I0218 16:40:42.056261 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:40:45.057071 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:40:48.057851 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:40:51.058298 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:40:54.059426 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 18 16:40:54.059: INFO: Creating new exec pod Feb 18 16:41:03.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1502 execpodsqwf9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Feb 18 16:41:03.492: INFO: stderr: "I0218 16:41:03.270117 1491 log.go:172] (0xc000546f20) (0xc0004fdf40) Create stream\nI0218 16:41:03.270220 1491 log.go:172] (0xc000546f20) 
(0xc0004fdf40) Stream added, broadcasting: 1\nI0218 16:41:03.276035 1491 log.go:172] (0xc000546f20) Reply frame received for 1\nI0218 16:41:03.276123 1491 log.go:172] (0xc000546f20) (0xc0002be820) Create stream\nI0218 16:41:03.276137 1491 log.go:172] (0xc000546f20) (0xc0002be820) Stream added, broadcasting: 3\nI0218 16:41:03.278448 1491 log.go:172] (0xc000546f20) Reply frame received for 3\nI0218 16:41:03.278485 1491 log.go:172] (0xc000546f20) (0xc0006a6000) Create stream\nI0218 16:41:03.278500 1491 log.go:172] (0xc000546f20) (0xc0006a6000) Stream added, broadcasting: 5\nI0218 16:41:03.280848 1491 log.go:172] (0xc000546f20) Reply frame received for 5\nI0218 16:41:03.386442 1491 log.go:172] (0xc000546f20) Data frame received for 5\nI0218 16:41:03.386512 1491 log.go:172] (0xc0006a6000) (5) Data frame handling\nI0218 16:41:03.386537 1491 log.go:172] (0xc0006a6000) (5) Data frame sent\nI0218 16:41:03.386562 1491 log.go:172] (0xc000546f20) Data frame received for 5\nI0218 16:41:03.386568 1491 log.go:172] (0xc0006a6000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nI0218 16:41:03.386608 1491 log.go:172] (0xc0006a6000) (5) Data frame sent\nI0218 16:41:03.405963 1491 log.go:172] (0xc000546f20) Data frame received for 5\nI0218 16:41:03.406008 1491 log.go:172] (0xc0006a6000) (5) Data frame handling\nI0218 16:41:03.406025 1491 log.go:172] (0xc0006a6000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0218 16:41:03.486993 1491 log.go:172] (0xc000546f20) Data frame received for 1\nI0218 16:41:03.487087 1491 log.go:172] (0xc0004fdf40) (1) Data frame handling\nI0218 16:41:03.487107 1491 log.go:172] (0xc0004fdf40) (1) Data frame sent\nI0218 16:41:03.487148 1491 log.go:172] (0xc000546f20) (0xc0002be820) Stream removed, broadcasting: 3\nI0218 16:41:03.487169 1491 log.go:172] (0xc000546f20) (0xc0006a6000) Stream removed, broadcasting: 5\nI0218 16:41:03.487181 1491 log.go:172] (0xc000546f20) (0xc0004fdf40) Stream removed, broadcasting: 1\nI0218 16:41:03.487207 1491 log.go:172] (0xc000546f20) Go away received\nI0218 16:41:03.487839 1491 log.go:172] (0xc000546f20) (0xc0004fdf40) Stream removed, broadcasting: 1\nI0218 16:41:03.487852 1491 log.go:172] (0xc000546f20) (0xc0002be820) Stream removed, broadcasting: 3\nI0218 16:41:03.487859 1491 log.go:172] (0xc000546f20) (0xc0006a6000) Stream removed, broadcasting: 5\n" Feb 18 16:41:03.492: INFO: stdout: "" Feb 18 16:41:03.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1502 execpodsqwf9 -- /bin/sh -x -c nc -zv -t -w 2 10.96.35.15 80' Feb 18 16:41:03.947: INFO: stderr: "I0218 16:41:03.697090 1511 log.go:172] (0xc000b5eb00) (0xc00066db80) Create stream\nI0218 16:41:03.697239 1511 log.go:172] (0xc000b5eb00) (0xc00066db80) Stream added, broadcasting: 1\nI0218 16:41:03.703002 1511 log.go:172] (0xc000b5eb00) Reply frame received for 1\nI0218 16:41:03.703097 1511 log.go:172] (0xc000b5eb00) (0xc00066dd60) Create stream\nI0218 16:41:03.703109 1511 log.go:172] (0xc000b5eb00) (0xc00066dd60) Stream added, broadcasting: 3\nI0218 16:41:03.705760 1511 log.go:172] (0xc000b5eb00) Reply frame received for 3\nI0218 16:41:03.705783 1511 log.go:172] (0xc000b5eb00) (0xc00066de00) Create stream\nI0218 16:41:03.705792 1511 log.go:172] (0xc000b5eb00) (0xc00066de00) Stream added, broadcasting: 5\nI0218 16:41:03.707838 1511 log.go:172] (0xc000b5eb00) Reply frame received for 5\nI0218 16:41:03.796150 1511 log.go:172] (0xc000b5eb00) Data frame received for 5\nI0218 16:41:03.796343 
1511 log.go:172] (0xc00066de00) (5) Data frame handling\nI0218 16:41:03.796437 1511 log.go:172] (0xc00066de00) (5) Data frame sent\nI0218 16:41:03.796469 1511 log.go:172] (0xc000b5eb00) Data frame received for 5\nI0218 16:41:03.796491 1511 log.go:172] (0xc00066de00) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.35.15 80\nConnection to 10.96.35.15 80 port [tcp/http] succeeded!\nI0218 16:41:03.796628 1511 log.go:172] (0xc00066de00) (5) Data frame sent\nI0218 16:41:03.932340 1511 log.go:172] (0xc000b5eb00) (0xc00066dd60) Stream removed, broadcasting: 3\nI0218 16:41:03.932890 1511 log.go:172] (0xc000b5eb00) Data frame received for 1\nI0218 16:41:03.932914 1511 log.go:172] (0xc00066db80) (1) Data frame handling\nI0218 16:41:03.932962 1511 log.go:172] (0xc00066db80) (1) Data frame sent\nI0218 16:41:03.932979 1511 log.go:172] (0xc000b5eb00) (0xc00066db80) Stream removed, broadcasting: 1\nI0218 16:41:03.934302 1511 log.go:172] (0xc000b5eb00) (0xc00066de00) Stream removed, broadcasting: 5\nI0218 16:41:03.934827 1511 log.go:172] (0xc000b5eb00) Go away received\nI0218 16:41:03.935164 1511 log.go:172] (0xc000b5eb00) (0xc00066db80) Stream removed, broadcasting: 1\nI0218 16:41:03.935214 1511 log.go:172] (0xc000b5eb00) (0xc00066dd60) Stream removed, broadcasting: 3\nI0218 16:41:03.935227 1511 log.go:172] (0xc000b5eb00) (0xc00066de00) Stream removed, broadcasting: 5\n" Feb 18 16:41:03.948: INFO: stdout: "" Feb 18 16:41:03.948: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:41:04.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1502" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:25.332 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":280,"completed":115,"skipped":1494,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:41:04.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-e153addf-8fff-4ea2-a92f-d0cd8c15e39e STEP: Creating a pod to test consume secrets Feb 18 16:41:04.175: INFO: Waiting up to 5m0s 
for pod "pod-secrets-f674d878-7591-4fc6-b655-2dd19bff5a90" in namespace "secrets-4012" to be "success or failure" Feb 18 16:41:04.180: INFO: Pod "pod-secrets-f674d878-7591-4fc6-b655-2dd19bff5a90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172826ms Feb 18 16:41:06.190: INFO: Pod "pod-secrets-f674d878-7591-4fc6-b655-2dd19bff5a90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014839181s Feb 18 16:41:08.199: INFO: Pod "pod-secrets-f674d878-7591-4fc6-b655-2dd19bff5a90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023467157s Feb 18 16:41:10.245: INFO: Pod "pod-secrets-f674d878-7591-4fc6-b655-2dd19bff5a90": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069795498s Feb 18 16:41:12.252: INFO: Pod "pod-secrets-f674d878-7591-4fc6-b655-2dd19bff5a90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076740019s STEP: Saw pod success Feb 18 16:41:12.252: INFO: Pod "pod-secrets-f674d878-7591-4fc6-b655-2dd19bff5a90" satisfied condition "success or failure" Feb 18 16:41:12.256: INFO: Trying to get logs from node jerma-node pod pod-secrets-f674d878-7591-4fc6-b655-2dd19bff5a90 container secret-volume-test: STEP: delete the pod Feb 18 16:41:13.893: INFO: Waiting for pod pod-secrets-f674d878-7591-4fc6-b655-2dd19bff5a90 to disappear Feb 18 16:41:14.214: INFO: Pod pod-secrets-f674d878-7591-4fc6-b655-2dd19bff5a90 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:41:14.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4012" for this suite. • [SLOW TEST:10.259 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":116,"skipped":1500,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:41:14.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1899 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 18 16:41:14.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod 
--generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1987' Feb 18 16:41:14.627: INFO: stderr: "" Feb 18 16:41:14.627: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Feb 18 16:41:24.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1987 -o json' Feb 18 16:41:24.840: INFO: stderr: "" Feb 18 16:41:24.841: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-18T16:41:14Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1987\",\n \"resourceVersion\": \"9211940\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1987/pods/e2e-test-httpd-pod\",\n \"uid\": \"0dacc3a9-7a16-47be-b88d-b8719685f8e2\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-76vnx\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-76vnx\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-76vnx\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-18T16:41:14Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-18T16:41:23Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-18T16:41:23Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-18T16:41:14Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://8d5d3396416c54cc3bb80ec6eecdf84a0eb29b1fe2d6757283b8ec7c38b08400\",\n \"image\": \"httpd:2.4.38-alpine\",\n \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-18T16:41:22Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.2.250\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"podIPs\": [\n {\n \"ip\": \"10.44.0.1\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-18T16:41:14Z\"\n 
}\n}\n" STEP: replace the image in the pod Feb 18 16:41:24.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1987' Feb 18 16:41:25.360: INFO: stderr: "" Feb 18 16:41:25.360: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1904 Feb 18 16:41:25.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1987' Feb 18 16:41:31.284: INFO: stderr: "" Feb 18 16:41:31.284: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:41:31.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1987" for this suite. • [SLOW TEST:17.085 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":280,"completed":117,"skipped":1502,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:41:31.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Feb 18 16:41:40.024: INFO: Successfully updated pod "labelsupdatebba6d91c-3f6c-482d-82f4-a21835a31dea" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:41:42.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2712" for this suite. 
• [SLOW TEST:10.738 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":118,"skipped":1524,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:41:42.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-d4e3f258-bfac-472a-9e0a-da2356257fdf STEP: Creating a pod to test consume configMaps Feb 18 16:41:42.272: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f8051895-358a-4347-ac2a-fba691da8748" in namespace "projected-5982" to be "success or failure" Feb 18 16:41:42.278: INFO: Pod "pod-projected-configmaps-f8051895-358a-4347-ac2a-fba691da8748": Phase="Pending", Reason="", readiness=false. Elapsed: 5.158747ms Feb 18 16:41:44.283: INFO: Pod "pod-projected-configmaps-f8051895-358a-4347-ac2a-fba691da8748": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010079523s Feb 18 16:41:46.288: INFO: Pod "pod-projected-configmaps-f8051895-358a-4347-ac2a-fba691da8748": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015161114s Feb 18 16:41:48.581: INFO: Pod "pod-projected-configmaps-f8051895-358a-4347-ac2a-fba691da8748": Phase="Pending", Reason="", readiness=false. Elapsed: 6.308838439s Feb 18 16:41:50.597: INFO: Pod "pod-projected-configmaps-f8051895-358a-4347-ac2a-fba691da8748": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.324537084s STEP: Saw pod success Feb 18 16:41:50.598: INFO: Pod "pod-projected-configmaps-f8051895-358a-4347-ac2a-fba691da8748" satisfied condition "success or failure" Feb 18 16:41:50.602: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-f8051895-358a-4347-ac2a-fba691da8748 container projected-configmap-volume-test: STEP: delete the pod Feb 18 16:41:50.723: INFO: Waiting for pod pod-projected-configmaps-f8051895-358a-4347-ac2a-fba691da8748 to disappear Feb 18 16:41:50.728: INFO: Pod pod-projected-configmaps-f8051895-358a-4347-ac2a-fba691da8748 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:41:50.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5982" for this suite. • [SLOW TEST:8.641 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":119,"skipped":1528,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:41:50.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 18 16:41:51.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6039' Feb 18 16:41:51.146: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 18 16:41:51.146: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Feb 18 16:41:51.258: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-fbm2s] Feb 18 16:41:51.259: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-fbm2s" in namespace "kubectl-6039" to be "running and ready" Feb 18 16:41:51.262: INFO: Pod "e2e-test-httpd-rc-fbm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 3.705351ms Feb 18 16:41:53.272: INFO: Pod "e2e-test-httpd-rc-fbm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013591469s Feb 18 16:41:55.280: INFO: Pod "e2e-test-httpd-rc-fbm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021580995s Feb 18 16:41:57.289: INFO: Pod "e2e-test-httpd-rc-fbm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030528064s Feb 18 16:41:59.295: INFO: Pod "e2e-test-httpd-rc-fbm2s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03631814s Feb 18 16:42:01.307: INFO: Pod "e2e-test-httpd-rc-fbm2s": Phase="Running", Reason="", readiness=true. Elapsed: 10.048211087s Feb 18 16:42:01.307: INFO: Pod "e2e-test-httpd-rc-fbm2s" satisfied condition "running and ready" Feb 18 16:42:01.307: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-fbm2s] Feb 18 16:42:01.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-6039' Feb 18 16:42:01.509: INFO: stderr: "" Feb 18 16:42:01.509: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Tue Feb 18 16:41:59.153716 2020] [mpm_event:notice] [pid 1:tid 139786913786728] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue Feb 18 16:41:59.153798 2020] [core:notice] [pid 1:tid 139786913786728] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1639 Feb 18 16:42:01.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6039' Feb 18 16:42:01.686: INFO: stderr: "" Feb 18 16:42:01.686: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:42:01.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6039" for this suite. 
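[Editor's note] The deprecation warning captured above is worth flagging: the run/v1 generator (which produced a ReplicationController) was deprecated in this kubectl version and removed in later releases, where kubectl run only creates a bare pod. Rough equivalents for reference; the demo names are hypothetical:
# What the test ran, emitting the deprecation warning:
kubectl run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1
# Later-era equivalents: a bare pod, or a managed workload via kubectl create:
kubectl run httpd-pod --image=docker.io/library/httpd:2.4.38-alpine
kubectl create deployment httpd-demo --image=docker.io/library/httpd:2.4.38-alpine
# Logs can be addressed through the controller, as the test does:
kubectl logs rc/e2e-test-httpd-rc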
• [SLOW TEST:10.936 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1630 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":280,"completed":120,"skipped":1551,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:42:01.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:42:18.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1795" for this suite. • [SLOW TEST:16.380 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":280,"completed":121,"skipped":1583,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:42:18.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-798 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-798 STEP: Creating statefulset with conflicting port in namespace statefulset-798 STEP: Waiting until pod test-pod will start running in namespace statefulset-798 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-798 Feb 18 16:42:30.429: INFO: Observed stateful pod in namespace: statefulset-798, name: ss-0, uid: b6b44786-14a8-466f-a82e-b7eeb97b1c3e, status phase: Pending. Waiting for statefulset controller to delete. Feb 18 16:42:33.118: INFO: Observed stateful pod in namespace: statefulset-798, name: ss-0, uid: b6b44786-14a8-466f-a82e-b7eeb97b1c3e, status phase: Failed. Waiting for statefulset controller to delete. Feb 18 16:42:33.140: INFO: Observed stateful pod in namespace: statefulset-798, name: ss-0, uid: b6b44786-14a8-466f-a82e-b7eeb97b1c3e, status phase: Failed. Waiting for statefulset controller to delete. Feb 18 16:42:33.235: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-798 STEP: Removing pod with conflicting port in namespace statefulset-798 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-798 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Feb 18 16:42:51.419: INFO: Deleting all statefulset in ns statefulset-798 Feb 18 16:42:51.433: INFO: Scaling statefulset ss to 0 Feb 18 16:43:01.464: INFO: Waiting for statefulset status.replicas updated to 0 Feb 18 16:43:01.469: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:43:01.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-798" for this suite. 
• [SLOW TEST:43.446 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":280,"completed":122,"skipped":1586,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:43:01.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:43:01.631: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 18 16:43:06.647: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 18 16:43:08.665: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 18 16:43:10.676: INFO: Creating deployment "test-rollover-deployment" Feb 18 16:43:10.706: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 18 16:43:12.716: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 18 16:43:12.723: INFO: Ensure that both replica sets have 1 created replica Feb 18 16:43:12.731: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 18 16:43:12.744: INFO: Updating deployment test-rollover-deployment Feb 18 16:43:12.744: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 18 16:43:14.775: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 18 16:43:14.786: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 18 16:43:14.795: INFO: all replica sets need to contain the pod-template-hash label Feb 18 16:43:14.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640992, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:43:17.221: INFO: all replica sets need to contain the pod-template-hash label Feb 18 16:43:17.221: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640992, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:43:18.818: INFO: all replica sets need to contain the pod-template-hash label Feb 18 16:43:18.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640992, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:43:20.960: INFO: all replica sets need to contain the pod-template-hash label Feb 18 16:43:20.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640992, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:43:22.809: INFO: all replica sets need to contain the 
pod-template-hash label Feb 18 16:43:22.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641002, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:43:24.806: INFO: all replica sets need to contain the pod-template-hash label Feb 18 16:43:24.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641002, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:43:26.804: INFO: all replica sets need to contain the pod-template-hash label Feb 18 16:43:26.805: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641002, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:43:28.808: INFO: all replica sets need to contain the pod-template-hash label Feb 18 16:43:28.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641002, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:43:30.803: INFO: all replica sets need to contain the pod-template-hash label Feb 18 16:43:30.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641002, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:43:33.057: INFO: Feb 18 16:43:33.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641012, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717640990, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:43:34.813: INFO: Feb 18 16:43:34.813: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Feb 18 16:43:34.822: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-3670 /apis/apps/v1/namespaces/deployment-3670/deployments/test-rollover-deployment 120fa28b-bff2-4550-a7ee-d260672ba949 9212587 2 2020-02-18 16:43:10 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] 
nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002fd6598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-18 16:43:10 +0000 UTC,LastTransitionTime:2020-02-18 16:43:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-02-18 16:43:33 +0000 UTC,LastTransitionTime:2020-02-18 16:43:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 18 16:43:34.825: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-3670 /apis/apps/v1/namespaces/deployment-3670/replicasets/test-rollover-deployment-574d6dfbff 8338a766-af73-4285-b3e6-4d7a9c100175 9212576 2 2020-02-18 16:43:12 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 120fa28b-bff2-4550-a7ee-d260672ba949 0xc002fd69f7 0xc002fd69f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002fd6a68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 18 16:43:34.826: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 18 16:43:34.826: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3670 /apis/apps/v1/namespaces/deployment-3670/replicasets/test-rollover-controller ceedfbe6-d3f3-4ea7-bd52-47733f8fad93 9212586 2 2020-02-18 16:43:01 +0000 UTC 
map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 120fa28b-bff2-4550-a7ee-d260672ba949 0xc002fd6927 0xc002fd6928}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002fd6988 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 18 16:43:34.826: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-3670 /apis/apps/v1/namespaces/deployment-3670/replicasets/test-rollover-deployment-f6c94f66c 3582262a-1ff3-4203-aaa9-8a1f8b585245 9212520 2 2020-02-18 16:43:10 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 120fa28b-bff2-4550-a7ee-d260672ba949 0xc002fd6ad0 0xc002fd6ad1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002fd6b48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 18 16:43:34.830: INFO: Pod "test-rollover-deployment-574d6dfbff-kht4h" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-kht4h test-rollover-deployment-574d6dfbff- deployment-3670 /api/v1/namespaces/deployment-3670/pods/test-rollover-deployment-574d6dfbff-kht4h 1b7c8c3d-2d81-4671-b4fa-31010b359f13 9212548 0 2020-02-18 16:43:12 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 8338a766-af73-4285-b3e6-4d7a9c100175 0xc002fd7097 0xc002fd7098}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rnsk8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rnsk8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rnsk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:43:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:43:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:43:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:43:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-18 16:43:12 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 16:43:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://fb48b5c81f84ffee45fa59d096817709d4ed55a033f64703efe435c4427fc72c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:43:34.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3670" for this suite. • [SLOW TEST:33.308 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":280,"completed":123,"skipped":1642,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:43:34.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:43:34.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9924" for this suite. 
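The 406 in the spec above comes from server-side table rendering: a client asks the apiserver to return a Table through content negotiation, and a backend that cannot produce table metadata for the requested version must answer 406 Not Acceptable. A minimal way to reproduce the negotiation by hand (the proxy port, namespace, and resource here are illustrative, not taken from this run):

# Ask the apiserver to render pods as a v1 Table via the Accept header;
# backends without table support for that group/version reply 406.
kubectl proxy --port=8001 &
curl -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
  http://127.0.0.1:8001/api/v1/namespaces/default/pods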
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":280,"completed":124,"skipped":1649,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:43:35.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a replication controller Feb 18 16:43:35.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9048' Feb 18 16:43:35.684: INFO: stderr: "" Feb 18 16:43:35.684: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 18 16:43:35.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9048' Feb 18 16:43:35.801: INFO: stderr: "" Feb 18 16:43:35.801: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Feb 18 16:43:40.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9048' Feb 18 16:43:42.806: INFO: stderr: "" Feb 18 16:43:42.807: INFO: stdout: "update-demo-nautilus-ggtsq update-demo-nautilus-jcvfj " Feb 18 16:43:42.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ggtsq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9048' Feb 18 16:43:43.208: INFO: stderr: "" Feb 18 16:43:43.208: INFO: stdout: "" Feb 18 16:43:43.208: INFO: update-demo-nautilus-ggtsq is created but not running Feb 18 16:43:48.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9048' Feb 18 16:43:48.389: INFO: stderr: "" Feb 18 16:43:48.389: INFO: stdout: "update-demo-nautilus-ggtsq update-demo-nautilus-jcvfj " Feb 18 16:43:48.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ggtsq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9048' Feb 18 16:43:48.495: INFO: stderr: "" Feb 18 16:43:48.495: INFO: stdout: "true" Feb 18 16:43:48.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ggtsq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9048' Feb 18 16:43:48.639: INFO: stderr: "" Feb 18 16:43:48.639: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 18 16:43:48.639: INFO: validating pod update-demo-nautilus-ggtsq Feb 18 16:43:48.649: INFO: got data: { "image": "nautilus.jpg" } Feb 18 16:43:48.649: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 18 16:43:48.649: INFO: update-demo-nautilus-ggtsq is verified up and running Feb 18 16:43:48.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcvfj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9048' Feb 18 16:43:48.736: INFO: stderr: "" Feb 18 16:43:48.736: INFO: stdout: "true" Feb 18 16:43:48.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcvfj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9048' Feb 18 16:43:48.878: INFO: stderr: "" Feb 18 16:43:48.879: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 18 16:43:48.879: INFO: validating pod update-demo-nautilus-jcvfj Feb 18 16:43:48.884: INFO: got data: { "image": "nautilus.jpg" } Feb 18 16:43:48.884: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 18 16:43:48.884: INFO: update-demo-nautilus-jcvfj is verified up and running STEP: scaling down the replication controller Feb 18 16:43:48.886: INFO: scanned /root for discovery docs: Feb 18 16:43:48.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9048' Feb 18 16:43:50.100: INFO: stderr: "" Feb 18 16:43:50.100: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 18 16:43:50.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9048' Feb 18 16:43:50.249: INFO: stderr: "" Feb 18 16:43:50.250: INFO: stdout: "update-demo-nautilus-ggtsq update-demo-nautilus-jcvfj " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 18 16:43:55.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9048' Feb 18 16:43:55.409: INFO: stderr: "" Feb 18 16:43:55.409: INFO: stdout: "update-demo-nautilus-ggtsq update-demo-nautilus-jcvfj " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 18 16:44:00.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9048' Feb 18 16:44:00.603: INFO: stderr: "" Feb 18 16:44:00.603: INFO: stdout: "update-demo-nautilus-ggtsq update-demo-nautilus-jcvfj " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 18 16:44:05.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9048' Feb 18 16:44:05.779: INFO: stderr: "" Feb 18 16:44:05.780: INFO: stdout: "update-demo-nautilus-jcvfj " Feb 18 16:44:05.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcvfj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9048' Feb 18 16:44:05.987: INFO: stderr: "" Feb 18 16:44:05.987: INFO: stdout: "true" Feb 18 16:44:05.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcvfj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9048' Feb 18 16:44:06.190: INFO: stderr: "" Feb 18 16:44:06.191: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 18 16:44:06.191: INFO: validating pod update-demo-nautilus-jcvfj Feb 18 16:44:06.196: INFO: got data: { "image": "nautilus.jpg" } Feb 18 16:44:06.196: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 18 16:44:06.196: INFO: update-demo-nautilus-jcvfj is verified up and running STEP: scaling up the replication controller Feb 18 16:44:06.198: INFO: scanned /root for discovery docs: Feb 18 16:44:06.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9048' Feb 18 16:44:07.375: INFO: stderr: "" Feb 18 16:44:07.376: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 18 16:44:07.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9048' Feb 18 16:44:07.601: INFO: stderr: "" Feb 18 16:44:07.601: INFO: stdout: "update-demo-nautilus-bfb95 update-demo-nautilus-jcvfj " Feb 18 16:44:07.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bfb95 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9048' Feb 18 16:44:07.700: INFO: stderr: "" Feb 18 16:44:07.701: INFO: stdout: "" Feb 18 16:44:07.701: INFO: update-demo-nautilus-bfb95 is created but not running Feb 18 16:44:12.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9048' Feb 18 16:44:12.837: INFO: stderr: "" Feb 18 16:44:12.837: INFO: stdout: "update-demo-nautilus-bfb95 update-demo-nautilus-jcvfj " Feb 18 16:44:12.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bfb95 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9048' Feb 18 16:44:12.938: INFO: stderr: "" Feb 18 16:44:12.938: INFO: stdout: "true" Feb 18 16:44:12.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bfb95 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9048' Feb 18 16:44:13.024: INFO: stderr: "" Feb 18 16:44:13.024: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 18 16:44:13.024: INFO: validating pod update-demo-nautilus-bfb95 Feb 18 16:44:13.031: INFO: got data: { "image": "nautilus.jpg" } Feb 18 16:44:13.032: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 18 16:44:13.032: INFO: update-demo-nautilus-bfb95 is verified up and running Feb 18 16:44:13.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcvfj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9048' Feb 18 16:44:13.112: INFO: stderr: "" Feb 18 16:44:13.112: INFO: stdout: "true" Feb 18 16:44:13.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcvfj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9048' Feb 18 16:44:13.276: INFO: stderr: "" Feb 18 16:44:13.276: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 18 16:44:13.276: INFO: validating pod update-demo-nautilus-jcvfj Feb 18 16:44:13.281: INFO: got data: { "image": "nautilus.jpg" } Feb 18 16:44:13.281: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 18 16:44:13.281: INFO: update-demo-nautilus-jcvfj is verified up and running STEP: using delete to clean up resources Feb 18 16:44:13.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9048' Feb 18 16:44:13.369: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 18 16:44:13.369: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 18 16:44:13.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9048' Feb 18 16:44:13.502: INFO: stderr: "No resources found in kubectl-9048 namespace.\n" Feb 18 16:44:13.502: INFO: stdout: "" Feb 18 16:44:13.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9048 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 18 16:44:13.612: INFO: stderr: "" Feb 18 16:44:13.613: INFO: stdout: "update-demo-nautilus-bfb95\nupdate-demo-nautilus-jcvfj\n" Feb 18 16:44:14.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9048' Feb 18 16:44:15.109: INFO: stderr: "No resources found in kubectl-9048 namespace.\n" Feb 18 16:44:15.109: INFO: stdout: "" Feb 18 16:44:15.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9048 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 18 16:44:15.373: INFO: stderr: "" Feb 18 16:44:15.373: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:44:15.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9048" for this suite. 
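Both resizes above follow one loop: issue kubectl scale, then poll the pod names with a go-template until the live count matches the requested replica count. Condensed from the invocations logged in this spec:

# Scale the replication controller, then list pod names until one remains.
kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus \
  --replicas=1 --timeout=5m --namespace=kubectl-9048
kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo \
  --namespace=kubectl-9048 \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'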
• [SLOW TEST:40.397 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":280,"completed":125,"skipped":1654,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:44:15.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:44:15.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 18 16:44:15.886: INFO: stderr: "" Feb 18 16:44:15.886: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.2.152+426b3538900329\", GitCommit:\"426b3538900329ed2ce5a0cb1cccf2f0ff32db60\", GitTreeState:\"clean\", BuildDate:\"2020-01-25T12:55:25Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:44:15.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6129" for this suite. 
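The assertion in the version spec above is only that both halves of the report reach stdout: a Client Version line and a Server Version line. The same information can be pulled by hand, and the client also offers a structured form:

kubectl version             # human-readable Client Version / Server Version lines
kubectl version -o json     # clientVersion and serverVersion objects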
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":280,"completed":126,"skipped":1664,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:44:15.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1384 STEP: creating the pod Feb 18 16:44:16.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6243' Feb 18 16:44:17.535: INFO: stderr: "" Feb 18 16:44:17.535: INFO: stdout: "pod/pause created\n" Feb 18 16:44:17.535: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 18 16:44:17.536: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6243" to be "running and ready" Feb 18 16:44:17.630: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 94.45383ms Feb 18 16:44:19.670: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134294603s Feb 18 16:44:21.697: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161392183s Feb 18 16:44:23.707: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171398624s Feb 18 16:44:25.739: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.203436339s Feb 18 16:44:25.739: INFO: Pod "pause" satisfied condition "running and ready" Feb 18 16:44:25.739: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: adding the label testing-label with value testing-label-value to a pod Feb 18 16:44:25.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6243' Feb 18 16:44:25.971: INFO: stderr: "" Feb 18 16:44:25.971: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 18 16:44:25.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6243' Feb 18 16:44:26.133: INFO: stderr: "" Feb 18 16:44:26.133: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 18 16:44:26.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6243' Feb 18 16:44:26.281: INFO: stderr: "" Feb 18 16:44:26.281: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 18 16:44:26.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6243' Feb 18 16:44:26.440: INFO: stderr: "" Feb 18 16:44:26.440: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 STEP: using delete to clean up resources Feb 18 16:44:26.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6243' Feb 18 16:44:26.640: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 18 16:44:26.640: INFO: stdout: "pod \"pause\" force deleted\n" Feb 18 16:44:26.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6243' Feb 18 16:44:26.795: INFO: stderr: "No resources found in kubectl-6243 namespace.\n" Feb 18 16:44:26.795: INFO: stdout: "" Feb 18 16:44:26.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6243 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 18 16:44:26.891: INFO: stderr: "" Feb 18 16:44:26.891: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:44:26.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6243" for this suite. 
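The round-trip above is the standard labeling idiom: key=value adds or updates a label, -L surfaces it as an extra output column, and a trailing dash removes it. Condensed from the commands logged in this spec:

kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-6243
kubectl get pod pause -L testing-label --namespace=kubectl-6243
kubectl label pods pause testing-label- --namespace=kubectl-6243   # trailing '-' deletes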
• [SLOW TEST:10.998 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1381 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":280,"completed":127,"skipped":1673,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:44:26.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Feb 18 16:44:35.844: INFO: Successfully updated pod "labelsupdate2e739519-e156-4528-ac0f-57f1be1c07ac" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:44:37.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-699" for this suite. 
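The update verified above depends on downward API volumes being refreshed by the kubelet: pod labels are projected into a file, and a change to metadata.labels eventually shows up in that file without restarting the container. A minimal sketch of such a pod (the name, label, and busybox image are illustrative, not taken from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    release: stable
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl label pod labels-demo release=canary --overwrite   # the projected file catches up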
• [SLOW TEST:11.017 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":128,"skipped":1731,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:44:37.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 18 16:44:38.097: INFO: Number of nodes with available pods: 0 Feb 18 16:44:38.097: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:44:39.113: INFO: Number of nodes with available pods: 0 Feb 18 16:44:39.113: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:44:40.112: INFO: Number of nodes with available pods: 0 Feb 18 16:44:40.112: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:44:41.572: INFO: Number of nodes with available pods: 0 Feb 18 16:44:41.572: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:44:42.124: INFO: Number of nodes with available pods: 0 Feb 18 16:44:42.124: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:44:43.153: INFO: Number of nodes with available pods: 0 Feb 18 16:44:43.153: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:44:45.413: INFO: Number of nodes with available pods: 0 Feb 18 16:44:45.414: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:44:46.133: INFO: Number of nodes with available pods: 0 Feb 18 16:44:46.134: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:44:47.413: INFO: Number of nodes with available pods: 0 Feb 18 16:44:47.413: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:44:48.255: INFO: Number of nodes with available pods: 0 Feb 18 16:44:48.255: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:44:49.109: INFO: Number of nodes with available pods: 0 Feb 18 16:44:49.109: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:44:50.110: INFO: Number of nodes with available pods: 2 Feb 18 16:44:50.110: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
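In the DaemonSet spec that follows, the controller treats a Failed daemon pod as missing: it deletes the pod and schedules a replacement on the same node, which is what the "revived" step waits for. An approximation by hand (the pod name is a placeholder; the spec itself flips status.phase to Failed rather than deleting):

kubectl get pods --namespace=daemonsets-4947 -o wide          # find a daemon pod
kubectl delete pod <daemon-set-pod> --namespace=daemonsets-4947
kubectl get pods --namespace=daemonsets-4947 --watch          # replacement appears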
Feb 18 16:44:50.185: INFO: Number of nodes with available pods: 2 Feb 18 16:44:50.185: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4947, will wait for the garbage collector to delete the pods Feb 18 16:44:51.729: INFO: Deleting DaemonSet.extensions daemon-set took: 10.542108ms Feb 18 16:44:52.430: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.736124ms Feb 18 16:44:58.436: INFO: Number of nodes with available pods: 0 Feb 18 16:44:58.436: INFO: Number of running nodes: 0, number of available pods: 0 Feb 18 16:44:58.439: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4947/daemonsets","resourceVersion":"9213013"},"items":null} Feb 18 16:44:58.462: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4947/pods","resourceVersion":"9213013"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:44:58.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4947" for this suite. • [SLOW TEST:20.567 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":280,"completed":129,"skipped":1741,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:44:58.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:44:58.633: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-128 I0218 16:44:58.647350 9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-128, replica count: 1 I0218 16:44:59.698025 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:45:00.698696 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:45:01.699828 9 runners.go:189] svc-latency-rc Pods: 1 
out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:45:02.700957 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:45:03.701822 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:45:04.702612 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0218 16:45:05.703055 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 18 16:45:05.847: INFO: Created: latency-svc-8w2ss Feb 18 16:45:05.871: INFO: Got endpoints: latency-svc-8w2ss [68.510346ms] Feb 18 16:45:05.959: INFO: Created: latency-svc-4jpqt Feb 18 16:45:05.970: INFO: Got endpoints: latency-svc-4jpqt [96.780726ms] Feb 18 16:45:06.049: INFO: Created: latency-svc-n46dv Feb 18 16:45:06.113: INFO: Got endpoints: latency-svc-n46dv [239.743626ms] Feb 18 16:45:06.114: INFO: Created: latency-svc-9h8xd Feb 18 16:45:06.201: INFO: Got endpoints: latency-svc-9h8xd [327.625968ms] Feb 18 16:45:06.213: INFO: Created: latency-svc-rqwhx Feb 18 16:45:06.241: INFO: Got endpoints: latency-svc-rqwhx [367.456171ms] Feb 18 16:45:06.267: INFO: Created: latency-svc-fxt82 Feb 18 16:45:06.292: INFO: Created: latency-svc-p8jgm Feb 18 16:45:06.293: INFO: Got endpoints: latency-svc-fxt82 [419.470797ms] Feb 18 16:45:06.397: INFO: Got endpoints: latency-svc-p8jgm [523.088304ms] Feb 18 16:45:06.404: INFO: Created: latency-svc-nlc7q Feb 18 16:45:06.410: INFO: Got endpoints: latency-svc-nlc7q [537.833794ms] Feb 18 16:45:06.432: INFO: Created: latency-svc-pxmp6 Feb 18 16:45:06.455: INFO: Got endpoints: latency-svc-pxmp6 [582.645621ms] Feb 18 16:45:06.494: INFO: Created: latency-svc-2rqv8 Feb 18 16:45:06.603: INFO: Got endpoints: latency-svc-2rqv8 [729.211127ms] Feb 18 16:45:06.612: INFO: Created: latency-svc-t6p56 Feb 18 16:45:06.622: INFO: Got endpoints: latency-svc-t6p56 [749.020328ms] Feb 18 16:45:06.644: INFO: Created: latency-svc-b9nxs Feb 18 16:45:06.649: INFO: Got endpoints: latency-svc-b9nxs [775.819981ms] Feb 18 16:45:06.676: INFO: Created: latency-svc-65fwk Feb 18 16:45:06.677: INFO: Got endpoints: latency-svc-65fwk [802.850595ms] Feb 18 16:45:06.734: INFO: Created: latency-svc-pdm6m Feb 18 16:45:06.741: INFO: Got endpoints: latency-svc-pdm6m [867.590193ms] Feb 18 16:45:06.761: INFO: Created: latency-svc-sgk8j Feb 18 16:45:06.784: INFO: Got endpoints: latency-svc-sgk8j [910.167792ms] Feb 18 16:45:06.790: INFO: Created: latency-svc-wk9vf Feb 18 16:45:06.791: INFO: Got endpoints: latency-svc-wk9vf [918.533109ms] Feb 18 16:45:06.809: INFO: Created: latency-svc-5w6l2 Feb 18 16:45:06.818: INFO: Got endpoints: latency-svc-5w6l2 [848.062166ms] Feb 18 16:45:06.884: INFO: Created: latency-svc-rv6q5 Feb 18 16:45:06.926: INFO: Created: latency-svc-487qq Feb 18 16:45:06.927: INFO: Got endpoints: latency-svc-487qq [725.643511ms] Feb 18 16:45:06.927: INFO: Got endpoints: latency-svc-rv6q5 [813.025626ms] Feb 18 16:45:06.971: INFO: Created: latency-svc-wm5h2 Feb 18 16:45:06.976: INFO: Got endpoints: latency-svc-wm5h2 [735.603954ms] Feb 18 16:45:07.034: INFO: Created: latency-svc-mrmk9 Feb 18 16:45:07.043: INFO: Got endpoints: latency-svc-mrmk9 [750.22268ms] Feb 18 16:45:07.093: INFO: Created: 
latency-svc-vh758 Feb 18 16:45:07.109: INFO: Got endpoints: latency-svc-vh758 [711.858485ms] Feb 18 16:45:07.178: INFO: Created: latency-svc-l7scp Feb 18 16:45:07.179: INFO: Got endpoints: latency-svc-l7scp [768.049422ms] Feb 18 16:45:07.212: INFO: Created: latency-svc-vp5wj Feb 18 16:45:07.226: INFO: Got endpoints: latency-svc-vp5wj [770.728671ms] Feb 18 16:45:07.255: INFO: Created: latency-svc-chsjn Feb 18 16:45:07.264: INFO: Got endpoints: latency-svc-chsjn [660.555321ms] Feb 18 16:45:07.318: INFO: Created: latency-svc-4bbhv Feb 18 16:45:07.322: INFO: Got endpoints: latency-svc-4bbhv [699.620425ms] Feb 18 16:45:07.357: INFO: Created: latency-svc-9ll22 Feb 18 16:45:07.364: INFO: Got endpoints: latency-svc-9ll22 [715.099179ms] Feb 18 16:45:07.387: INFO: Created: latency-svc-2s5df Feb 18 16:45:07.390: INFO: Got endpoints: latency-svc-2s5df [713.52038ms] Feb 18 16:45:07.410: INFO: Created: latency-svc-nhc6p Feb 18 16:45:07.413: INFO: Got endpoints: latency-svc-nhc6p [671.218106ms] Feb 18 16:45:07.481: INFO: Created: latency-svc-bjffn Feb 18 16:45:07.485: INFO: Got endpoints: latency-svc-bjffn [94.712517ms] Feb 18 16:45:07.529: INFO: Created: latency-svc-7xgvj Feb 18 16:45:07.538: INFO: Got endpoints: latency-svc-7xgvj [754.0182ms] Feb 18 16:45:07.569: INFO: Created: latency-svc-xpllz Feb 18 16:45:07.570: INFO: Got endpoints: latency-svc-xpllz [778.998108ms] Feb 18 16:45:07.627: INFO: Created: latency-svc-czcld Feb 18 16:45:07.658: INFO: Created: latency-svc-zjpfc Feb 18 16:45:07.659: INFO: Got endpoints: latency-svc-czcld [840.394989ms] Feb 18 16:45:07.751: INFO: Got endpoints: latency-svc-zjpfc [824.410794ms] Feb 18 16:45:07.761: INFO: Created: latency-svc-s8sd8 Feb 18 16:45:07.764: INFO: Got endpoints: latency-svc-s8sd8 [837.168218ms] Feb 18 16:45:07.797: INFO: Created: latency-svc-gsfjp Feb 18 16:45:07.889: INFO: Got endpoints: latency-svc-gsfjp [912.803784ms] Feb 18 16:45:07.899: INFO: Created: latency-svc-8b4l8 Feb 18 16:45:07.906: INFO: Got endpoints: latency-svc-8b4l8 [862.919873ms] Feb 18 16:45:07.924: INFO: Created: latency-svc-7vnfw Feb 18 16:45:07.934: INFO: Got endpoints: latency-svc-7vnfw [825.166554ms] Feb 18 16:45:07.957: INFO: Created: latency-svc-7npgn Feb 18 16:45:07.979: INFO: Got endpoints: latency-svc-7npgn [800.578373ms] Feb 18 16:45:07.982: INFO: Created: latency-svc-77tv8 Feb 18 16:45:08.026: INFO: Got endpoints: latency-svc-77tv8 [799.933881ms] Feb 18 16:45:08.044: INFO: Created: latency-svc-gth7c Feb 18 16:45:08.050: INFO: Got endpoints: latency-svc-gth7c [785.813725ms] Feb 18 16:45:08.108: INFO: Created: latency-svc-nh7h8 Feb 18 16:45:08.120: INFO: Got endpoints: latency-svc-nh7h8 [797.202909ms] Feb 18 16:45:08.169: INFO: Created: latency-svc-wz64s Feb 18 16:45:08.174: INFO: Got endpoints: latency-svc-wz64s [809.884762ms] Feb 18 16:45:08.201: INFO: Created: latency-svc-fn9mz Feb 18 16:45:08.202: INFO: Got endpoints: latency-svc-fn9mz [789.298825ms] Feb 18 16:45:08.253: INFO: Created: latency-svc-8pdt7 Feb 18 16:45:08.256: INFO: Got endpoints: latency-svc-8pdt7 [770.555398ms] Feb 18 16:45:08.310: INFO: Created: latency-svc-wclqx Feb 18 16:45:08.318: INFO: Got endpoints: latency-svc-wclqx [779.494049ms] Feb 18 16:45:08.367: INFO: Created: latency-svc-r68fv Feb 18 16:45:08.400: INFO: Created: latency-svc-2wlk8 Feb 18 16:45:08.413: INFO: Got endpoints: latency-svc-r68fv [842.957242ms] Feb 18 16:45:08.464: INFO: Got endpoints: latency-svc-2wlk8 [804.445076ms] Feb 18 16:45:08.476: INFO: Created: latency-svc-8gxtl Feb 18 16:45:08.477: INFO: Got endpoints: 
latency-svc-8gxtl [724.952802ms] Feb 18 16:45:08.542: INFO: Created: latency-svc-8p66b Feb 18 16:45:08.552: INFO: Got endpoints: latency-svc-8p66b [787.531483ms] Feb 18 16:45:08.615: INFO: Created: latency-svc-xfkcc Feb 18 16:45:08.627: INFO: Got endpoints: latency-svc-xfkcc [737.275691ms] Feb 18 16:45:08.658: INFO: Created: latency-svc-jm9cp Feb 18 16:45:08.659: INFO: Got endpoints: latency-svc-jm9cp [752.421831ms] Feb 18 16:45:08.821: INFO: Created: latency-svc-2frrr Feb 18 16:45:08.881: INFO: Got endpoints: latency-svc-2frrr [946.707385ms] Feb 18 16:45:08.894: INFO: Created: latency-svc-qkrf9 Feb 18 16:45:08.898: INFO: Got endpoints: latency-svc-qkrf9 [918.244372ms] Feb 18 16:45:08.980: INFO: Created: latency-svc-rb84g Feb 18 16:45:09.020: INFO: Got endpoints: latency-svc-rb84g [994.006948ms] Feb 18 16:45:09.027: INFO: Created: latency-svc-ccnzq Feb 18 16:45:09.045: INFO: Got endpoints: latency-svc-ccnzq [995.027325ms] Feb 18 16:45:09.064: INFO: Created: latency-svc-8plq2 Feb 18 16:45:09.071: INFO: Got endpoints: latency-svc-8plq2 [951.461527ms] Feb 18 16:45:09.313: INFO: Created: latency-svc-bghvk Feb 18 16:45:09.326: INFO: Got endpoints: latency-svc-bghvk [1.151782419s] Feb 18 16:45:09.375: INFO: Created: latency-svc-p8r2x Feb 18 16:45:09.478: INFO: Created: latency-svc-lrksb Feb 18 16:45:09.478: INFO: Got endpoints: latency-svc-p8r2x [1.275781126s] Feb 18 16:45:09.505: INFO: Got endpoints: latency-svc-lrksb [1.248845605s] Feb 18 16:45:09.530: INFO: Created: latency-svc-c4msf Feb 18 16:45:09.538: INFO: Got endpoints: latency-svc-c4msf [1.219967535s] Feb 18 16:45:09.579: INFO: Created: latency-svc-whq4l Feb 18 16:45:09.602: INFO: Got endpoints: latency-svc-whq4l [1.188565876s] Feb 18 16:45:09.616: INFO: Created: latency-svc-vhkm4 Feb 18 16:45:09.621: INFO: Got endpoints: latency-svc-vhkm4 [1.157422478s] Feb 18 16:45:09.657: INFO: Created: latency-svc-5n9fg Feb 18 16:45:09.657: INFO: Got endpoints: latency-svc-5n9fg [1.18029688s] Feb 18 16:45:09.680: INFO: Created: latency-svc-l9747 Feb 18 16:45:09.685: INFO: Got endpoints: latency-svc-l9747 [1.133220389s] Feb 18 16:45:09.740: INFO: Created: latency-svc-jn9hd Feb 18 16:45:09.748: INFO: Got endpoints: latency-svc-jn9hd [1.120604047s] Feb 18 16:45:09.784: INFO: Created: latency-svc-tx82b Feb 18 16:45:09.871: INFO: Got endpoints: latency-svc-tx82b [1.212455259s] Feb 18 16:45:09.873: INFO: Created: latency-svc-fwrgt Feb 18 16:45:09.883: INFO: Got endpoints: latency-svc-fwrgt [1.00199457s] Feb 18 16:45:09.911: INFO: Created: latency-svc-2zvp7 Feb 18 16:45:09.913: INFO: Got endpoints: latency-svc-2zvp7 [1.015444041s] Feb 18 16:45:09.934: INFO: Created: latency-svc-nbh5h Feb 18 16:45:09.956: INFO: Got endpoints: latency-svc-nbh5h [934.891329ms] Feb 18 16:45:09.963: INFO: Created: latency-svc-n9h8m Feb 18 16:45:10.027: INFO: Got endpoints: latency-svc-n9h8m [981.704749ms] Feb 18 16:45:10.036: INFO: Created: latency-svc-h5cqb Feb 18 16:45:10.045: INFO: Got endpoints: latency-svc-h5cqb [973.860011ms] Feb 18 16:45:10.095: INFO: Created: latency-svc-lwnc4 Feb 18 16:45:10.110: INFO: Got endpoints: latency-svc-lwnc4 [783.995147ms] Feb 18 16:45:10.180: INFO: Created: latency-svc-mxt5k Feb 18 16:45:10.195: INFO: Got endpoints: latency-svc-mxt5k [716.858104ms] Feb 18 16:45:10.227: INFO: Created: latency-svc-ntg8x Feb 18 16:45:10.232: INFO: Got endpoints: latency-svc-ntg8x [726.914187ms] Feb 18 16:45:10.254: INFO: Created: latency-svc-k64gw Feb 18 16:45:10.254: INFO: Got endpoints: latency-svc-k64gw [716.08026ms] Feb 18 16:45:10.364: INFO: Created: 
latency-svc-5ztg6 Feb 18 16:45:10.422: INFO: Got endpoints: latency-svc-5ztg6 [820.510724ms] Feb 18 16:45:10.559: INFO: Created: latency-svc-nqfpd Feb 18 16:45:10.566: INFO: Got endpoints: latency-svc-nqfpd [944.457999ms] Feb 18 16:45:10.651: INFO: Created: latency-svc-x92bp Feb 18 16:45:10.759: INFO: Got endpoints: latency-svc-x92bp [1.101802348s] Feb 18 16:45:10.777: INFO: Created: latency-svc-ksqwl Feb 18 16:45:10.794: INFO: Got endpoints: latency-svc-ksqwl [1.108567886s] Feb 18 16:45:10.818: INFO: Created: latency-svc-dvhbc Feb 18 16:45:10.819: INFO: Got endpoints: latency-svc-dvhbc [1.070658181s] Feb 18 16:45:10.847: INFO: Created: latency-svc-n9xww Feb 18 16:45:10.849: INFO: Got endpoints: latency-svc-n9xww [977.40338ms] Feb 18 16:45:10.915: INFO: Created: latency-svc-6d9gh Feb 18 16:45:10.932: INFO: Got endpoints: latency-svc-6d9gh [1.049002872s] Feb 18 16:45:10.961: INFO: Created: latency-svc-m8pdt Feb 18 16:45:10.985: INFO: Got endpoints: latency-svc-m8pdt [1.071756451s] Feb 18 16:45:11.077: INFO: Created: latency-svc-sfxk8 Feb 18 16:45:11.094: INFO: Got endpoints: latency-svc-sfxk8 [1.137735615s] Feb 18 16:45:11.098: INFO: Created: latency-svc-xpzf4 Feb 18 16:45:11.111: INFO: Got endpoints: latency-svc-xpzf4 [1.084088293s] Feb 18 16:45:11.142: INFO: Created: latency-svc-9kpdc Feb 18 16:45:11.150: INFO: Got endpoints: latency-svc-9kpdc [1.104965189s] Feb 18 16:45:11.172: INFO: Created: latency-svc-phdn5 Feb 18 16:45:11.218: INFO: Got endpoints: latency-svc-phdn5 [1.10750996s] Feb 18 16:45:11.232: INFO: Created: latency-svc-n8rnw Feb 18 16:45:11.242: INFO: Got endpoints: latency-svc-n8rnw [1.047422073s] Feb 18 16:45:11.260: INFO: Created: latency-svc-cg8dr Feb 18 16:45:11.272: INFO: Got endpoints: latency-svc-cg8dr [1.040403899s] Feb 18 16:45:11.274: INFO: Created: latency-svc-c64dz Feb 18 16:45:11.282: INFO: Got endpoints: latency-svc-c64dz [1.027311082s] Feb 18 16:45:11.317: INFO: Created: latency-svc-s58q4 Feb 18 16:45:11.400: INFO: Got endpoints: latency-svc-s58q4 [977.053651ms] Feb 18 16:45:11.426: INFO: Created: latency-svc-vhjfg Feb 18 16:45:11.443: INFO: Got endpoints: latency-svc-vhjfg [876.581349ms] Feb 18 16:45:11.468: INFO: Created: latency-svc-dqltb Feb 18 16:45:11.482: INFO: Got endpoints: latency-svc-dqltb [722.326809ms] Feb 18 16:45:11.568: INFO: Created: latency-svc-g82fn Feb 18 16:45:11.593: INFO: Got endpoints: latency-svc-g82fn [798.32596ms] Feb 18 16:45:11.636: INFO: Created: latency-svc-nr4zr Feb 18 16:45:11.643: INFO: Got endpoints: latency-svc-nr4zr [823.8659ms] Feb 18 16:45:11.738: INFO: Created: latency-svc-j885c Feb 18 16:45:11.738: INFO: Got endpoints: latency-svc-j885c [888.656851ms] Feb 18 16:45:11.800: INFO: Created: latency-svc-n4b52 Feb 18 16:45:11.804: INFO: Got endpoints: latency-svc-n4b52 [871.484498ms] Feb 18 16:45:11.885: INFO: Created: latency-svc-4sg27 Feb 18 16:45:11.928: INFO: Got endpoints: latency-svc-4sg27 [942.16119ms] Feb 18 16:45:11.930: INFO: Created: latency-svc-g7hh4 Feb 18 16:45:11.987: INFO: Got endpoints: latency-svc-g7hh4 [893.363801ms] Feb 18 16:45:12.103: INFO: Created: latency-svc-g6b95 Feb 18 16:45:12.116: INFO: Got endpoints: latency-svc-g6b95 [1.005303211s] Feb 18 16:45:12.168: INFO: Created: latency-svc-8dwjh Feb 18 16:45:12.175: INFO: Got endpoints: latency-svc-8dwjh [1.024933325s] Feb 18 16:45:12.284: INFO: Created: latency-svc-4j4kr Feb 18 16:45:12.345: INFO: Got endpoints: latency-svc-4j4kr [1.12657147s] Feb 18 16:45:12.346: INFO: Created: latency-svc-l29vm Feb 18 16:45:12.357: INFO: Got endpoints: latency-svc-l29vm 
[1.114983467s] Feb 18 16:45:12.488: INFO: Created: latency-svc-fwtf2 Feb 18 16:45:12.524: INFO: Got endpoints: latency-svc-fwtf2 [1.251650744s] Feb 18 16:45:12.585: INFO: Created: latency-svc-cb82s Feb 18 16:45:12.661: INFO: Got endpoints: latency-svc-cb82s [1.379479717s] Feb 18 16:45:12.677: INFO: Created: latency-svc-tfk5w Feb 18 16:45:12.718: INFO: Got endpoints: latency-svc-tfk5w [1.317788281s] Feb 18 16:45:12.721: INFO: Created: latency-svc-xh99h Feb 18 16:45:12.726: INFO: Got endpoints: latency-svc-xh99h [1.283005521s] Feb 18 16:45:12.842: INFO: Created: latency-svc-hjlgp Feb 18 16:45:12.857: INFO: Got endpoints: latency-svc-hjlgp [1.374812493s] Feb 18 16:45:12.886: INFO: Created: latency-svc-hngpg Feb 18 16:45:12.942: INFO: Got endpoints: latency-svc-hngpg [1.349357474s] Feb 18 16:45:12.946: INFO: Created: latency-svc-sdk8z Feb 18 16:45:13.051: INFO: Got endpoints: latency-svc-sdk8z [1.408132174s] Feb 18 16:45:13.067: INFO: Created: latency-svc-dzgj9 Feb 18 16:45:13.070: INFO: Got endpoints: latency-svc-dzgj9 [1.3325875s] Feb 18 16:45:13.156: INFO: Created: latency-svc-5x5kf Feb 18 16:45:13.208: INFO: Got endpoints: latency-svc-5x5kf [1.403713506s] Feb 18 16:45:13.232: INFO: Created: latency-svc-hqlvt Feb 18 16:45:13.242: INFO: Got endpoints: latency-svc-hqlvt [1.313824734s] Feb 18 16:45:13.279: INFO: Created: latency-svc-6sszq Feb 18 16:45:13.295: INFO: Got endpoints: latency-svc-6sszq [1.307389908s] Feb 18 16:45:13.422: INFO: Created: latency-svc-skfp2 Feb 18 16:45:13.423: INFO: Got endpoints: latency-svc-skfp2 [1.306283685s] Feb 18 16:45:13.449: INFO: Created: latency-svc-s798x Feb 18 16:45:13.473: INFO: Got endpoints: latency-svc-s798x [1.297708329s] Feb 18 16:45:13.477: INFO: Created: latency-svc-q7f9c Feb 18 16:45:13.480: INFO: Got endpoints: latency-svc-q7f9c [1.134698892s] Feb 18 16:45:13.504: INFO: Created: latency-svc-cxrzb Feb 18 16:45:13.573: INFO: Got endpoints: latency-svc-cxrzb [1.215388956s] Feb 18 16:45:13.586: INFO: Created: latency-svc-g4wf9 Feb 18 16:45:13.641: INFO: Got endpoints: latency-svc-g4wf9 [1.11681802s] Feb 18 16:45:13.647: INFO: Created: latency-svc-pnhlx Feb 18 16:45:13.651: INFO: Got endpoints: latency-svc-pnhlx [989.676079ms] Feb 18 16:45:13.725: INFO: Created: latency-svc-rhhnj Feb 18 16:45:13.777: INFO: Created: latency-svc-swkh6 Feb 18 16:45:13.781: INFO: Got endpoints: latency-svc-rhhnj [1.062543162s] Feb 18 16:45:13.802: INFO: Got endpoints: latency-svc-swkh6 [1.076082282s] Feb 18 16:45:13.886: INFO: Created: latency-svc-52mlg Feb 18 16:45:13.903: INFO: Got endpoints: latency-svc-52mlg [1.045796709s] Feb 18 16:45:13.913: INFO: Created: latency-svc-vp52w Feb 18 16:45:13.917: INFO: Got endpoints: latency-svc-vp52w [973.866928ms] Feb 18 16:45:13.948: INFO: Created: latency-svc-ltdfq Feb 18 16:45:13.959: INFO: Got endpoints: latency-svc-ltdfq [907.908998ms] Feb 18 16:45:14.042: INFO: Created: latency-svc-cdbzf Feb 18 16:45:14.047: INFO: Got endpoints: latency-svc-cdbzf [976.026229ms] Feb 18 16:45:14.111: INFO: Created: latency-svc-p2brt Feb 18 16:45:14.113: INFO: Got endpoints: latency-svc-p2brt [905.480619ms] Feb 18 16:45:14.183: INFO: Created: latency-svc-2q4vx Feb 18 16:45:14.186: INFO: Got endpoints: latency-svc-2q4vx [943.764853ms] Feb 18 16:45:14.248: INFO: Created: latency-svc-ttdt6 Feb 18 16:45:14.350: INFO: Got endpoints: latency-svc-ttdt6 [1.055103806s] Feb 18 16:45:14.359: INFO: Created: latency-svc-4kdcb Feb 18 16:45:14.376: INFO: Got endpoints: latency-svc-4kdcb [953.190852ms] Feb 18 16:45:14.427: INFO: Created: latency-svc-8ktzg Feb 
18 16:45:14.525: INFO: Got endpoints: latency-svc-8ktzg [1.051803634s] Feb 18 16:45:14.529: INFO: Created: latency-svc-l7kl2 Feb 18 16:45:14.542: INFO: Got endpoints: latency-svc-l7kl2 [1.061734748s] Feb 18 16:45:14.557: INFO: Created: latency-svc-vvs2j Feb 18 16:45:14.563: INFO: Got endpoints: latency-svc-vvs2j [989.543638ms] Feb 18 16:45:14.582: INFO: Created: latency-svc-7d76w Feb 18 16:45:14.584: INFO: Got endpoints: latency-svc-7d76w [942.000774ms] Feb 18 16:45:14.599: INFO: Created: latency-svc-fkmhz Feb 18 16:45:14.619: INFO: Got endpoints: latency-svc-fkmhz [967.381436ms] Feb 18 16:45:14.620: INFO: Created: latency-svc-rblss Feb 18 16:45:14.659: INFO: Got endpoints: latency-svc-rblss [877.812491ms] Feb 18 16:45:14.682: INFO: Created: latency-svc-x5wkn Feb 18 16:45:14.695: INFO: Got endpoints: latency-svc-x5wkn [891.768377ms] Feb 18 16:45:14.719: INFO: Created: latency-svc-9html Feb 18 16:45:14.723: INFO: Got endpoints: latency-svc-9html [819.388633ms] Feb 18 16:45:14.747: INFO: Created: latency-svc-smlc6 Feb 18 16:45:14.798: INFO: Got endpoints: latency-svc-smlc6 [881.517473ms] Feb 18 16:45:14.809: INFO: Created: latency-svc-68vpj Feb 18 16:45:14.817: INFO: Got endpoints: latency-svc-68vpj [857.241417ms] Feb 18 16:45:14.834: INFO: Created: latency-svc-fb4pr Feb 18 16:45:14.837: INFO: Got endpoints: latency-svc-fb4pr [790.304222ms] Feb 18 16:45:14.869: INFO: Created: latency-svc-x5zjm Feb 18 16:45:14.882: INFO: Got endpoints: latency-svc-x5zjm [768.167967ms] Feb 18 16:45:14.937: INFO: Created: latency-svc-ghnl4 Feb 18 16:45:14.962: INFO: Created: latency-svc-ks6px Feb 18 16:45:14.964: INFO: Got endpoints: latency-svc-ghnl4 [778.064612ms] Feb 18 16:45:14.971: INFO: Got endpoints: latency-svc-ks6px [620.375511ms] Feb 18 16:45:14.999: INFO: Created: latency-svc-95l8x Feb 18 16:45:15.019: INFO: Got endpoints: latency-svc-95l8x [642.698481ms] Feb 18 16:45:15.095: INFO: Created: latency-svc-gsqpv Feb 18 16:45:15.146: INFO: Created: latency-svc-nz2m4 Feb 18 16:45:15.146: INFO: Got endpoints: latency-svc-gsqpv [620.55996ms] Feb 18 16:45:15.157: INFO: Got endpoints: latency-svc-nz2m4 [614.762803ms] Feb 18 16:45:15.178: INFO: Created: latency-svc-sfvzw Feb 18 16:45:15.181: INFO: Got endpoints: latency-svc-sfvzw [617.219296ms] Feb 18 16:45:15.251: INFO: Created: latency-svc-z69x8 Feb 18 16:45:15.276: INFO: Got endpoints: latency-svc-z69x8 [692.316658ms] Feb 18 16:45:15.277: INFO: Created: latency-svc-fmf7g Feb 18 16:45:15.294: INFO: Got endpoints: latency-svc-fmf7g [675.138138ms] Feb 18 16:45:15.297: INFO: Created: latency-svc-f9xs2 Feb 18 16:45:15.439: INFO: Got endpoints: latency-svc-f9xs2 [780.273781ms] Feb 18 16:45:15.445: INFO: Created: latency-svc-x59sm Feb 18 16:45:15.452: INFO: Got endpoints: latency-svc-x59sm [756.678644ms] Feb 18 16:45:15.474: INFO: Created: latency-svc-gbzp4 Feb 18 16:45:15.486: INFO: Got endpoints: latency-svc-gbzp4 [763.083763ms] Feb 18 16:45:15.509: INFO: Created: latency-svc-gzksv Feb 18 16:45:15.524: INFO: Got endpoints: latency-svc-gzksv [725.260599ms] Feb 18 16:45:15.574: INFO: Created: latency-svc-scfqp Feb 18 16:45:15.579: INFO: Got endpoints: latency-svc-scfqp [762.096338ms] Feb 18 16:45:15.602: INFO: Created: latency-svc-bzwvn Feb 18 16:45:15.605: INFO: Got endpoints: latency-svc-bzwvn [767.768998ms] Feb 18 16:45:15.632: INFO: Created: latency-svc-9ltpx Feb 18 16:45:15.648: INFO: Got endpoints: latency-svc-9ltpx [765.984314ms] Feb 18 16:45:15.667: INFO: Created: latency-svc-nvmwk Feb 18 16:45:15.738: INFO: Got endpoints: latency-svc-nvmwk [773.404476ms] 
Feb 18 16:45:15.749: INFO: Created: latency-svc-2mc5h Feb 18 16:45:15.757: INFO: Got endpoints: latency-svc-2mc5h [785.805546ms] Feb 18 16:45:15.779: INFO: Created: latency-svc-zwh8w Feb 18 16:45:15.795: INFO: Got endpoints: latency-svc-zwh8w [775.778201ms] Feb 18 16:45:15.819: INFO: Created: latency-svc-zclht Feb 18 16:45:15.902: INFO: Got endpoints: latency-svc-zclht [756.154469ms] Feb 18 16:45:15.911: INFO: Created: latency-svc-6vmrf Feb 18 16:45:15.943: INFO: Created: latency-svc-bbsnd Feb 18 16:45:15.943: INFO: Got endpoints: latency-svc-6vmrf [786.417164ms] Feb 18 16:45:15.960: INFO: Got endpoints: latency-svc-bbsnd [779.342774ms] Feb 18 16:45:15.980: INFO: Created: latency-svc-97zs6 Feb 18 16:45:15.988: INFO: Got endpoints: latency-svc-97zs6 [711.232679ms] Feb 18 16:45:16.059: INFO: Created: latency-svc-t65xj Feb 18 16:45:16.063: INFO: Got endpoints: latency-svc-t65xj [768.862191ms] Feb 18 16:45:16.119: INFO: Created: latency-svc-n5pf4 Feb 18 16:45:16.126: INFO: Got endpoints: latency-svc-n5pf4 [687.241981ms] Feb 18 16:45:16.275: INFO: Created: latency-svc-kz25z Feb 18 16:45:16.287: INFO: Got endpoints: latency-svc-kz25z [835.06879ms] Feb 18 16:45:16.351: INFO: Created: latency-svc-2wtcw Feb 18 16:45:16.352: INFO: Got endpoints: latency-svc-2wtcw [865.701862ms] Feb 18 16:45:16.521: INFO: Created: latency-svc-6292b Feb 18 16:45:16.524: INFO: Got endpoints: latency-svc-6292b [1.000319826s] Feb 18 16:45:16.559: INFO: Created: latency-svc-hh5c9 Feb 18 16:45:16.578: INFO: Got endpoints: latency-svc-hh5c9 [999.164919ms] Feb 18 16:45:16.599: INFO: Created: latency-svc-76dt8 Feb 18 16:45:16.680: INFO: Created: latency-svc-zh4kf Feb 18 16:45:16.681: INFO: Got endpoints: latency-svc-76dt8 [1.076070307s] Feb 18 16:45:16.695: INFO: Got endpoints: latency-svc-zh4kf [1.047222193s] Feb 18 16:45:16.725: INFO: Created: latency-svc-6k7bc Feb 18 16:45:16.747: INFO: Got endpoints: latency-svc-6k7bc [1.008926973s] Feb 18 16:45:16.753: INFO: Created: latency-svc-jtbhh Feb 18 16:45:16.770: INFO: Got endpoints: latency-svc-jtbhh [1.01293685s] Feb 18 16:45:16.827: INFO: Created: latency-svc-snpzc Feb 18 16:45:16.858: INFO: Got endpoints: latency-svc-snpzc [1.063233939s] Feb 18 16:45:16.861: INFO: Created: latency-svc-nhvk6 Feb 18 16:45:16.864: INFO: Got endpoints: latency-svc-nhvk6 [961.315369ms] Feb 18 16:45:16.910: INFO: Created: latency-svc-zwdwj Feb 18 16:45:16.980: INFO: Got endpoints: latency-svc-zwdwj [1.036738728s] Feb 18 16:45:16.992: INFO: Created: latency-svc-xc9hg Feb 18 16:45:17.019: INFO: Got endpoints: latency-svc-xc9hg [1.058551377s] Feb 18 16:45:17.022: INFO: Created: latency-svc-5fc7v Feb 18 16:45:17.027: INFO: Got endpoints: latency-svc-5fc7v [1.038881641s] Feb 18 16:45:17.057: INFO: Created: latency-svc-fwl6r Feb 18 16:45:17.063: INFO: Got endpoints: latency-svc-fwl6r [999.260567ms] Feb 18 16:45:17.122: INFO: Created: latency-svc-rbrns Feb 18 16:45:17.145: INFO: Got endpoints: latency-svc-rbrns [1.017777197s] Feb 18 16:45:17.154: INFO: Created: latency-svc-xk6jv Feb 18 16:45:17.180: INFO: Got endpoints: latency-svc-xk6jv [892.669879ms] Feb 18 16:45:17.181: INFO: Created: latency-svc-bw2pw Feb 18 16:45:17.200: INFO: Got endpoints: latency-svc-bw2pw [847.914782ms] Feb 18 16:45:17.220: INFO: Created: latency-svc-mgl47 Feb 18 16:45:17.327: INFO: Got endpoints: latency-svc-mgl47 [802.330921ms] Feb 18 16:45:17.397: INFO: Created: latency-svc-4dflb Feb 18 16:45:17.399: INFO: Created: latency-svc-6d7pf Feb 18 16:45:17.422: INFO: Got endpoints: latency-svc-4dflb [843.726055ms] Feb 18 
16:45:17.424: INFO: Got endpoints: latency-svc-6d7pf [742.79337ms] Feb 18 16:45:17.463: INFO: Created: latency-svc-vwnzs Feb 18 16:45:17.468: INFO: Got endpoints: latency-svc-vwnzs [771.997376ms] Feb 18 16:45:17.489: INFO: Created: latency-svc-b75v7 Feb 18 16:45:17.499: INFO: Got endpoints: latency-svc-b75v7 [751.712898ms] Feb 18 16:45:17.522: INFO: Created: latency-svc-2sx8b Feb 18 16:45:17.527: INFO: Got endpoints: latency-svc-2sx8b [757.016187ms] Feb 18 16:45:17.562: INFO: Created: latency-svc-d9bg4 Feb 18 16:45:17.611: INFO: Got endpoints: latency-svc-d9bg4 [752.448305ms] Feb 18 16:45:17.638: INFO: Created: latency-svc-dch9p Feb 18 16:45:17.648: INFO: Got endpoints: latency-svc-dch9p [783.515964ms] Feb 18 16:45:17.679: INFO: Created: latency-svc-4pnkp Feb 18 16:45:17.681: INFO: Got endpoints: latency-svc-4pnkp [700.76827ms] Feb 18 16:45:17.699: INFO: Created: latency-svc-gkm9n Feb 18 16:45:17.707: INFO: Got endpoints: latency-svc-gkm9n [688.142299ms] Feb 18 16:45:17.743: INFO: Created: latency-svc-ptqbf Feb 18 16:45:17.780: INFO: Got endpoints: latency-svc-ptqbf [753.531273ms] Feb 18 16:45:17.784: INFO: Created: latency-svc-lsdm6 Feb 18 16:45:17.803: INFO: Got endpoints: latency-svc-lsdm6 [740.135995ms] Feb 18 16:45:17.824: INFO: Created: latency-svc-d9qcp Feb 18 16:45:17.833: INFO: Got endpoints: latency-svc-d9qcp [688.408342ms] Feb 18 16:45:17.889: INFO: Created: latency-svc-t7k77 Feb 18 16:45:17.898: INFO: Got endpoints: latency-svc-t7k77 [717.403268ms] Feb 18 16:45:17.942: INFO: Created: latency-svc-vsz8h Feb 18 16:45:17.963: INFO: Got endpoints: latency-svc-vsz8h [763.405796ms] Feb 18 16:45:18.034: INFO: Created: latency-svc-wxc7r Feb 18 16:45:18.042: INFO: Got endpoints: latency-svc-wxc7r [714.841482ms] Feb 18 16:45:18.083: INFO: Created: latency-svc-xrnb9 Feb 18 16:45:18.090: INFO: Got endpoints: latency-svc-xrnb9 [667.498649ms] Feb 18 16:45:18.090: INFO: Latencies: [94.712517ms 96.780726ms 239.743626ms 327.625968ms 367.456171ms 419.470797ms 523.088304ms 537.833794ms 582.645621ms 614.762803ms 617.219296ms 620.375511ms 620.55996ms 642.698481ms 660.555321ms 667.498649ms 671.218106ms 675.138138ms 687.241981ms 688.142299ms 688.408342ms 692.316658ms 699.620425ms 700.76827ms 711.232679ms 711.858485ms 713.52038ms 714.841482ms 715.099179ms 716.08026ms 716.858104ms 717.403268ms 722.326809ms 724.952802ms 725.260599ms 725.643511ms 726.914187ms 729.211127ms 735.603954ms 737.275691ms 740.135995ms 742.79337ms 749.020328ms 750.22268ms 751.712898ms 752.421831ms 752.448305ms 753.531273ms 754.0182ms 756.154469ms 756.678644ms 757.016187ms 762.096338ms 763.083763ms 763.405796ms 765.984314ms 767.768998ms 768.049422ms 768.167967ms 768.862191ms 770.555398ms 770.728671ms 771.997376ms 773.404476ms 775.778201ms 775.819981ms 778.064612ms 778.998108ms 779.342774ms 779.494049ms 780.273781ms 783.515964ms 783.995147ms 785.805546ms 785.813725ms 786.417164ms 787.531483ms 789.298825ms 790.304222ms 797.202909ms 798.32596ms 799.933881ms 800.578373ms 802.330921ms 802.850595ms 804.445076ms 809.884762ms 813.025626ms 819.388633ms 820.510724ms 823.8659ms 824.410794ms 825.166554ms 835.06879ms 837.168218ms 840.394989ms 842.957242ms 843.726055ms 847.914782ms 848.062166ms 857.241417ms 862.919873ms 865.701862ms 867.590193ms 871.484498ms 876.581349ms 877.812491ms 881.517473ms 888.656851ms 891.768377ms 892.669879ms 893.363801ms 905.480619ms 907.908998ms 910.167792ms 912.803784ms 918.244372ms 918.533109ms 934.891329ms 942.000774ms 942.16119ms 943.764853ms 944.457999ms 946.707385ms 951.461527ms 953.190852ms 961.315369ms 
967.381436ms 973.860011ms 973.866928ms 976.026229ms 977.053651ms 977.40338ms 981.704749ms 989.543638ms 989.676079ms 994.006948ms 995.027325ms 999.164919ms 999.260567ms 1.000319826s 1.00199457s 1.005303211s 1.008926973s 1.01293685s 1.015444041s 1.017777197s 1.024933325s 1.027311082s 1.036738728s 1.038881641s 1.040403899s 1.045796709s 1.047222193s 1.047422073s 1.049002872s 1.051803634s 1.055103806s 1.058551377s 1.061734748s 1.062543162s 1.063233939s 1.070658181s 1.071756451s 1.076070307s 1.076082282s 1.084088293s 1.101802348s 1.104965189s 1.10750996s 1.108567886s 1.114983467s 1.11681802s 1.120604047s 1.12657147s 1.133220389s 1.134698892s 1.137735615s 1.151782419s 1.157422478s 1.18029688s 1.188565876s 1.212455259s 1.215388956s 1.219967535s 1.248845605s 1.251650744s 1.275781126s 1.283005521s 1.297708329s 1.306283685s 1.307389908s 1.313824734s 1.317788281s 1.3325875s 1.349357474s 1.374812493s 1.379479717s 1.403713506s 1.408132174s] Feb 18 16:45:18.090: INFO: 50 %ile: 857.241417ms Feb 18 16:45:18.090: INFO: 90 %ile: 1.18029688s Feb 18 16:45:18.090: INFO: 99 %ile: 1.403713506s Feb 18 16:45:18.090: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:45:18.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-128" for this suite. • [SLOW TEST:19.616 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":280,"completed":130,"skipped":1742,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:45:18.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 18 16:45:18.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, Reason:"NewReplicaSetCreated", Message:"Created 
new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Feb 18 16:45:20.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:45:23.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:45:25.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:45:27.547: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:45:29.009: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:45:30.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641118, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 18 16:45:34.011: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Feb 18 16:45:34.053: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:45:34.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "webhook-1249" for this suite. STEP: Destroying namespace "webhook-1249-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:16.278 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":280,"completed":131,"skipped":1749,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:45:34.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 18 16:45:36.254: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 18 16:45:38.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:45:40.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment 
does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:45:42.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:45:44.730: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:45:46.549: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:45:48.565: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641136, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 18 16:45:51.669: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:45:51.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4132-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:45:53.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5034" for this suite. STEP: Destroying namespace "webhook-5034-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:18.961 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":280,"completed":132,"skipped":1758,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:45:53.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook 
request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 18 16:46:13.666: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 18 16:46:13.687: INFO: Pod pod-with-prestop-exec-hook still exists Feb 18 16:46:15.687: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 18 16:46:15.692: INFO: Pod pod-with-prestop-exec-hook still exists Feb 18 16:46:17.687: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 18 16:46:17.700: INFO: Pod pod-with-prestop-exec-hook still exists Feb 18 16:46:19.687: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 18 16:46:19.696: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:46:19.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6155" for this suite. • [SLOW TEST:26.393 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":280,"completed":133,"skipped":1827,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:46:19.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:47:19.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5631" for this suite. 
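For reference before the timing summary that follows: the failing-readiness-probe pattern exercised in the test above can be reproduced with a pod spec along these lines. This is a minimal sketch, not the suite's own fixture; it assumes k8s.io/api and k8s.io/apimachinery modules from the same era as this run (v1.17/v1.18, where Probe embeds Handler rather than ProbeHandler), and the pod name, image, and probe timings are illustrative.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingReadinessPod returns a pod whose readiness probe can never
// succeed. The kubelet should keep reporting Ready=false without ever
// restarting the container, which is what the test above asserts.
func failingReadinessPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "probe-test-pod", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "probe-test",
				Image:   "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"/bin/sh", "-c", "sleep 600"},
				ReadinessProbe: &corev1.Probe{
					// In v1.17-era APIs the probe action lives in an
					// embedded Handler struct.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       2,
				},
			}},
		},
	}
}
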
• [SLOW TEST:60.152 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":280,"completed":134,"skipped":1836,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:47:19.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 18 16:47:28.102: INFO: &Pod{ObjectMeta:{send-events-87162fd9-dec5-4cb4-b580-295776e1b6f0 events-9878 /api/v1/namespaces/events-9878/pods/send-events-87162fd9-dec5-4cb4-b580-295776e1b6f0 492455b7-7e8a-45af-b0df-7f8861861e96 9214817 0 2020-02-18 16:47:20 +0000 UTC map[name:foo time:35784683] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rsck4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rsck4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rsck4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:47:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:47:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:47:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:47:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-18 16:47:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 16:47:26 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://fe7e5d337439bec73d8d498bea6da45b77c151bb947572970e7fe72172ecc1ec,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Feb 18 16:47:30.108: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 18 16:47:32.155: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:47:32.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9878" for this suite. • [SLOW TEST:12.431 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":280,"completed":135,"skipped":1933,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:47:32.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:47:32.467: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-a93c68fa-4657-4c4e-91db-686769178337" in namespace "security-context-test-4062" to be "success or failure" Feb 18 16:47:32.476: INFO: Pod "busybox-readonly-false-a93c68fa-4657-4c4e-91db-686769178337": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223822ms Feb 18 16:47:34.509: INFO: Pod "busybox-readonly-false-a93c68fa-4657-4c4e-91db-686769178337": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041181072s Feb 18 16:47:36.896: INFO: Pod "busybox-readonly-false-a93c68fa-4657-4c4e-91db-686769178337": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.428294375s Feb 18 16:47:38.900: INFO: Pod "busybox-readonly-false-a93c68fa-4657-4c4e-91db-686769178337": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432712821s Feb 18 16:47:40.908: INFO: Pod "busybox-readonly-false-a93c68fa-4657-4c4e-91db-686769178337": Phase="Pending", Reason="", readiness=false. Elapsed: 8.440847752s Feb 18 16:47:42.924: INFO: Pod "busybox-readonly-false-a93c68fa-4657-4c4e-91db-686769178337": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.456210567s Feb 18 16:47:42.924: INFO: Pod "busybox-readonly-false-a93c68fa-4657-4c4e-91db-686769178337" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:47:42.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4062" for this suite. • [SLOW TEST:10.607 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":280,"completed":136,"skipped":1962,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:47:42.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:47:43.016: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-0f92e975-9ed6-4f68-839b-5632e43f8e36" in namespace "security-context-test-937" to be "success or failure" Feb 18 16:47:43.083: INFO: Pod "busybox-privileged-false-0f92e975-9ed6-4f68-839b-5632e43f8e36": Phase="Pending", Reason="", readiness=false. Elapsed: 66.462602ms Feb 18 16:47:45.095: INFO: Pod "busybox-privileged-false-0f92e975-9ed6-4f68-839b-5632e43f8e36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078462085s Feb 18 16:47:47.099: INFO: Pod "busybox-privileged-false-0f92e975-9ed6-4f68-839b-5632e43f8e36": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.082901022s Feb 18 16:47:49.107: INFO: Pod "busybox-privileged-false-0f92e975-9ed6-4f68-839b-5632e43f8e36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090440715s Feb 18 16:47:51.115: INFO: Pod "busybox-privileged-false-0f92e975-9ed6-4f68-839b-5632e43f8e36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098364319s Feb 18 16:47:51.115: INFO: Pod "busybox-privileged-false-0f92e975-9ed6-4f68-839b-5632e43f8e36" satisfied condition "success or failure" Feb 18 16:47:51.139: INFO: Got logs for pod "busybox-privileged-false-0f92e975-9ed6-4f68-839b-5632e43f8e36": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:47:51.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-937" for this suite. • [SLOW TEST:8.215 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":137,"skipped":1970,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:47:51.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 18 16:48:01.630: INFO: 10 pods remaining Feb 18 16:48:01.630: INFO: 0 pods has nil DeletionTimestamp Feb 18 16:48:01.630: INFO: Feb 18 16:48:02.422: INFO: 0 pods remaining Feb 18 16:48:02.423: INFO: 0 pods has nil DeletionTimestamp Feb 18 16:48:02.423: INFO: STEP: Gathering metrics W0218 16:48:04.086274 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
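The deleteOptions behaviour verified above, where the rc lingers through "10 pods remaining" down to "0 pods remaining" before disappearing, corresponds to foreground cascading deletion. A minimal client-go sketch of that call follows; it assumes a v0.18-era client where requests take a context, and the function and parameter names are illustrative. The metrics block printed after this note is ancillary teardown output from the same test.

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCForeground deletes a ReplicationController with foreground
// propagation: the API server keeps the RC object (with a
// deletionTimestamp set) until the garbage collector has deleted all
// pods it owns, matching the pod-count progression in the log above.
func deleteRCForeground(cs kubernetes.Interface, namespace, name string) error {
	policy := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(namespace).Delete(
		context.TODO(), name, metav1.DeleteOptions{PropagationPolicy: &policy})
}
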
Feb 18 16:48:04.086: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:48:04.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6472" for this suite. • [SLOW TEST:13.319 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":280,"completed":138,"skipped":1994,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:48:04.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
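The pod created in the [It] step below attaches a preStop hook pointing at the handler container set up in the step above; the kubelet performs the hook request before stopping the container, and the final "check prestop hook" step asserts the handler saw it. A rough sketch of such a container spec, assuming v1.17-era core/v1 types (lifecycle handlers are *corev1.Handler there); the path, port, and handler IP are illustrative, not taken from this run.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// withPreStopHTTP decorates a container with an HTTP GET preStop hook.
// When the pod is deleted, the kubelet issues the GET against the
// hook-handler pod before sending the container its stop signal.
func withPreStopHTTP(c corev1.Container, handlerIP string) corev1.Container {
	c.Lifecycle = &corev1.Lifecycle{
		PreStop: &corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/echo?msg=prestop", // illustrative endpoint
				Host: handlerIP,
				Port: intstr.FromInt(8080),
			},
		},
	}
	return c
}
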
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 18 16:48:31.438: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 18 16:48:31.458: INFO: Pod pod-with-prestop-http-hook still exists Feb 18 16:48:33.459: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 18 16:48:33.564: INFO: Pod pod-with-prestop-http-hook still exists Feb 18 16:48:35.459: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 18 16:48:35.465: INFO: Pod pod-with-prestop-http-hook still exists Feb 18 16:48:37.459: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 18 16:48:37.466: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:48:37.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5644" for this suite. • [SLOW TEST:33.030 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":280,"completed":139,"skipped":2008,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:48:37.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:49:11.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9764" for this suite. 
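The Job above (its timing summary follows) reaches its completion count even though individual tasks sometimes fail, because restartPolicy OnFailure lets the kubelet restart the failing container in place instead of marking the pod failed. A sketch of a Job with that shape, assuming batch/v1 and core/v1 types from this era; the name, counts, image, and failure command are illustrative, not the suite's fixture.

package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// flakyJob builds a Job whose container exits non-zero on odd-numbered
// seconds; with RestartPolicyOnFailure the container is restarted
// locally until it succeeds, so the Job still completes.
func flakyJob(namespace string) *batchv1.Job {
	completions, parallelism := int32(4), int32(2) // illustrative counts
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "flaky-tasks", Namespace: namespace},
		Spec: batchv1.JobSpec{
			Completions: &completions,
			Parallelism: &parallelism,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "task",
						Image:   "docker.io/library/busybox:1.29",
						Command: []string{"/bin/sh", "-c", "exit $(( $(date +%s) % 2 ))"},
					}},
				},
			},
		},
	}
}
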
• [SLOW TEST:34.162 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":280,"completed":140,"skipped":2088,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:49:11.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 18 16:49:22.083: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:49:22.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8220" for this suite. 
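The termination-message test above (summary trailer below) follows this pattern: a container runs as a non-root UID and writes its message to a non-default TerminationMessagePath, and the kubelet reads that file back into the container status, where the test compares it against the expected "DONE". A minimal sketch with illustrative UID, image, and path; only the "DONE" payload comes from the log above.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// terminationMessageContainer writes "DONE" to a custom termination
// message path as a non-root user; after the container exits, the
// kubelet surfaces the file's contents in the container status.
func terminationMessageContainer() corev1.Container {
	uid := int64(1000) // illustrative non-root UID
	return corev1.Container{
		Name:    "termination-message-container",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		TerminationMessagePath:   "/dev/termination-custom-log",
		TerminationMessagePolicy: corev1.TerminationMessageReadFile,
		SecurityContext:          &corev1.SecurityContext{RunAsUser: &uid},
	}
}
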
• [SLOW TEST:10.667 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":280,"completed":141,"skipped":2112,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:49:22.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:49:22.553: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 18 16:49:27.559: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 18 16:49:29.571: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Feb 18 16:49:29.623: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7634 /apis/apps/v1/namespaces/deployment-7634/deployments/test-cleanup-deployment 42e5c33b-6a5b-4162-bc92-7a786ec048b8 9215432 1 2020-02-18 16:49:29 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a7d268 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Feb 18 16:49:29.653: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-7634 /apis/apps/v1/namespaces/deployment-7634/replicasets/test-cleanup-deployment-55ffc6b7b6 cdec16e4-6643-4817-a354-c750cf5f762d 9215434 1 2020-02-18 16:49:29 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 42e5c33b-6a5b-4162-bc92-7a786ec048b8 0xc0028bcb07 0xc0028bcb08}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028bcb78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 18 16:49:29.653: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 18 16:49:29.654: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-7634 /apis/apps/v1/namespaces/deployment-7634/replicasets/test-cleanup-controller 5ae38f39-cb8f-40b0-9781-40e0f7f2c669 9215433 1 2020-02-18 16:49:22 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 42e5c33b-6a5b-4162-bc92-7a786ec048b8 0xc0028bca0f 0xc0028bca20}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0028bca88 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler 
[] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 18 16:49:29.678: INFO: Pod "test-cleanup-controller-dtx28" is available: &Pod{ObjectMeta:{test-cleanup-controller-dtx28 test-cleanup-controller- deployment-7634 /api/v1/namespaces/deployment-7634/pods/test-cleanup-controller-dtx28 a1e21a9b-5376-4751-8e35-4ad4b9b2e18c 9215430 0 2020-02-18 16:49:22 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 5ae38f39-cb8f-40b0-9781-40e0f7f2c669 0xc0028bcfd7 0xc0028bcfd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vwkhc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vwkhc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vwkhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:49:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:49:29 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:49:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:49:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-18 16:49:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 16:49:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c6e367b681c5133b10829badfa7c573a829f392413363eb9d3cc9a91b9d135b3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 18 16:49:29.678: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-xbb2h" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-xbb2h test-cleanup-deployment-55ffc6b7b6- deployment-7634 /api/v1/namespaces/deployment-7634/pods/test-cleanup-deployment-55ffc6b7b6-xbb2h 8fac56b4-8191-4d6d-9abe-b6ea6e5f69bf 9215438 0 2020-02-18 16:49:29 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 cdec16e4-6643-4817-a354-c750cf5f762d 0xc0028bd157 0xc0028bd158}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vwkhc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vwkhc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vwkhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup
:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:49:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:49:29.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7634" for this suite. • [SLOW TEST:7.416 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":280,"completed":142,"skipped":2143,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:49:29.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Feb 18 16:49:30.646: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Feb 18 16:49:32.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:49:34.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:49:36.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:49:38.733: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:49:40.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641370, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 18 16:49:43.777: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:49:43.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:49:45.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2078" for this suite. 
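For reference, the conversion machinery this spec exercises lives in the CRD's spec.conversion: two served versions, one storage version, and a webhook service to translate between them. A minimal sketch in Go, assuming the apiextensions.k8s.io/v1 types from k8s.io/apiextensions-apiserver; the group, names, handler path, port, and CA bundle below are illustrative placeholders, not values recorded in this run:

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Permissive structural schema shared by both versions.
func openSchema() *apiextensionsv1.CustomResourceValidation {
	preserve := true
	return &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
			Type:                   "object",
			XPreserveUnknownFields: &preserve,
		},
	}
}

func conversionCRD(caBundle []byte) *apiextensionsv1.CustomResourceDefinition {
	path := "/crdconvert" // hypothetical handler path
	port := int32(9443)   // hypothetical service port
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "testcrds.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "testcrds", Singular: "testcrd",
				Kind: "TestCrd", ListKind: "TestCrdList",
			},
			Scope: apiextensionsv1.NamespaceScoped,
			// v1 is the storage version; v2 is served and converted on demand.
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: openSchema()},
				{Name: "v2", Served: true, Storage: false, Schema: openSchema()},
			},
			Conversion: &apiextensionsv1.CustomResourceConversion{
				Strategy: apiextensionsv1.WebhookConverter,
				Webhook: &apiextensionsv1.WebhookConversion{
					ClientConfig: &apiextensionsv1.WebhookClientConfig{
						Service: &apiextensionsv1.ServiceReference{
							Namespace: "crd-webhook",
							Name:      "e2e-test-crd-conversion-webhook",
							Path:      &path,
							Port:      &port,
						},
						CABundle: caBundle, // cert from the "Setting up server cert" step
					},
					ConversionReviewVersions: []string{"v1", "v1beta1"},
				},
			},
		},
	}
}

func main() { fmt.Println(conversionCRD(nil).Name) }
```

With v1 as the storage version, listing a mixed set of v1 and v2 objects, as this spec does, forces the apiserver to round-trip each item through the webhook.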
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:15.650 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":280,"completed":143,"skipped":2144,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:49:45.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:49:45.574: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:49:52.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8810" for this suite. 
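The "listing custom resource definition objects" spec above amounts to creating a handful of CRDs and issuing a LIST against apiextensions.k8s.io/v1. A minimal sketch of the same call via the apiextensions clientset, assuming a client-go vintage where List takes a context (v0.18 and later; the suite's own v1.17-era client omits it). The kubeconfig path is the one the suite logs:

```go
package main

import (
	"context"
	"fmt"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite logs (">>> kubeConfig: /root/.kube/config").
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// LIST all CustomResourceDefinitions, as the spec does.
	crds, err := client.ApiextensionsV1().CustomResourceDefinitions().
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name, crd.Spec.Group)
	}
}
```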
• [SLOW TEST:7.477 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":280,"completed":144,"skipped":2169,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:49:52.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-df0da257-becf-4cfa-add1-88f7a694571a STEP: Creating a pod to test consume secrets Feb 18 16:49:52.987: INFO: Waiting up to 5m0s for pod "pod-secrets-13dbe443-ed73-47b7-8d3c-25d805baa1ad" in namespace "secrets-220" to be "success or failure" Feb 18 16:49:53.002: INFO: Pod "pod-secrets-13dbe443-ed73-47b7-8d3c-25d805baa1ad": Phase="Pending", Reason="", readiness=false. Elapsed: 15.24969ms Feb 18 16:49:55.009: INFO: Pod "pod-secrets-13dbe443-ed73-47b7-8d3c-25d805baa1ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022715048s Feb 18 16:49:57.017: INFO: Pod "pod-secrets-13dbe443-ed73-47b7-8d3c-25d805baa1ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029923138s Feb 18 16:49:59.025: INFO: Pod "pod-secrets-13dbe443-ed73-47b7-8d3c-25d805baa1ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03856401s Feb 18 16:50:01.033: INFO: Pod "pod-secrets-13dbe443-ed73-47b7-8d3c-25d805baa1ad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.046369023s STEP: Saw pod success Feb 18 16:50:01.033: INFO: Pod "pod-secrets-13dbe443-ed73-47b7-8d3c-25d805baa1ad" satisfied condition "success or failure" Feb 18 16:50:01.037: INFO: Trying to get logs from node jerma-node pod pod-secrets-13dbe443-ed73-47b7-8d3c-25d805baa1ad container secret-env-test: STEP: delete the pod Feb 18 16:50:01.284: INFO: Waiting for pod pod-secrets-13dbe443-ed73-47b7-8d3c-25d805baa1ad to disappear Feb 18 16:50:01.300: INFO: Pod pod-secrets-13dbe443-ed73-47b7-8d3c-25d805baa1ad no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:50:01.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-220" for this suite. • [SLOW TEST:8.462 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":280,"completed":145,"skipped":2249,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:50:01.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
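Before the lifecycle-hook spec continues below: the Secrets test that just passed in namespace secrets-220 reduces to two objects, a Secret and a run-to-completion pod whose container surfaces one key as an environment variable. A minimal sketch with k8s.io/api types; the secret name, key, value, and image are illustrative, not taken from this run:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func secretEnvPod() (*corev1.Secret, *corev1.Pod) {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			// Run to completion, matching the suite's "success or failure" wait.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	return secret, pod
}

func main() { s, p := secretEnvPod(); fmt.Println(s.Name, p.Name) }
```

The suite then polls the pod phase until Succeeded and reads the container log to confirm the value arrived.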
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 18 16:50:17.580: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 18 16:50:17.603: INFO: Pod pod-with-poststart-http-hook still exists Feb 18 16:50:19.604: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 18 16:50:19.615: INFO: Pod pod-with-poststart-http-hook still exists Feb 18 16:50:21.604: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 18 16:50:21.614: INFO: Pod pod-with-poststart-http-hook still exists Feb 18 16:50:23.604: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 18 16:50:23.615: INFO: Pod pod-with-poststart-http-hook still exists Feb 18 16:50:25.604: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 18 16:50:25.610: INFO: Pod pod-with-poststart-http-hook still exists Feb 18 16:50:27.603: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 18 16:50:27.611: INFO: Pod pod-with-poststart-http-hook still exists Feb 18 16:50:29.604: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 18 16:50:29.612: INFO: Pod pod-with-poststart-http-hook still exists Feb 18 16:50:31.604: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 18 16:50:31.612: INFO: Pod pod-with-poststart-http-hook still exists Feb 18 16:50:33.603: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 18 16:50:33.613: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:50:33.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7958" for this suite. 
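The postStart hook verified above is an HTTP GET that the kubelet fires against the handler pod (created in BeforeEach) as soon as the container starts; the test confirms the handler saw the request before deleting everything. A minimal sketch of the pod side, using the corev1.Handler type of this API vintage (later releases rename it LifecycleHandler); the target IP, path, port, and image are placeholders:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func podWithPostStartHTTPHook(targetIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // any long-running image works here
				Lifecycle: &corev1.Lifecycle{
					// Fired by the kubelet right after the container starts.
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: targetIP,              // IP of the handler pod
							Path: "/echo?msg=poststart", // handler records the hit
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(podWithPostStartHTTPHook("10.44.0.1").Name) }
```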
• [SLOW TEST:32.282 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":280,"completed":146,"skipped":2262,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:50:33.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:50:33.728: INFO: Creating deployment "test-recreate-deployment" Feb 18 16:50:33.734: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 18 16:50:33.752: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Feb 18 16:50:35.779: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 18 16:50:35.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641433, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641433, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641433, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641433, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:50:37.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641433, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641433, 
loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641433, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641433, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:50:39.877: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641433, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641433, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641433, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641433, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:50:41.796: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 18 16:50:41.808: INFO: Updating deployment test-recreate-deployment Feb 18 16:50:41.808: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Feb 18 16:50:42.100: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2215 /apis/apps/v1/namespaces/deployment-2215/deployments/test-recreate-deployment 59395211-d669-4add-b8b4-9d0fb5dac296 9215860 2 2020-02-18 16:50:33 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f333b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[]
[]}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-18 16:50:42 +0000 UTC,LastTransitionTime:2020-02-18 16:50:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-02-18 16:50:42 +0000 UTC,LastTransitionTime:2020-02-18 16:50:33 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Feb 18 16:50:42.108: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-2215 /apis/apps/v1/namespaces/deployment-2215/replicasets/test-recreate-deployment-5f94c574ff 701a47e6-80f9-4473-bc2a-cead72415ffa 9215859 1 2020-02-18 16:50:41 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 59395211-d669-4add-b8b4-9d0fb5dac296 0xc002f33787 0xc002f33788}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f337e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 18 16:50:42.108: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 18 16:50:42.108: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-2215 /apis/apps/v1/namespaces/deployment-2215/replicasets/test-recreate-deployment-799c574856 ca8cd3c6-99e6-441f-9f89-a87883b80a52 9215849 2 2020-02-18 16:50:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 59395211-d669-4add-b8b4-9d0fb5dac296 0xc002f33857 0xc002f33858}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] 
[{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f338c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 18 16:50:42.112: INFO: Pod "test-recreate-deployment-5f94c574ff-fzsks" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-fzsks test-recreate-deployment-5f94c574ff- deployment-2215 /api/v1/namespaces/deployment-2215/pods/test-recreate-deployment-5f94c574ff-fzsks 794a8c63-b6cd-4782-922b-0d62138e76c0 9215861 0 2020-02-18 16:50:41 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 701a47e6-80f9-4473-bc2a-cead72415ffa 0xc003cabd37 0xc003cabd38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2thmt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2thmt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2thmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,
Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:50:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:50:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:50:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 16:50:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-18 16:50:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:50:42.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2215" for this suite. 
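What distinguishes this deployment from the rolling-update ones elsewhere in the log is Strategy.Type=Recreate, visible in the dump above: the old ReplicaSet is scaled to zero before the new one comes up, which is why the test watches that new pods never overlap old ones. A minimal sketch of such a spec with k8s.io/api types; the replica count, labels, and image mirror the logged object but are otherwise illustrative:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func recreateDeployment() *appsv1.Deployment {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: scale the old ReplicaSet to 0, then bring up the new one.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "httpd",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
}

func main() { fmt.Println(recreateDeployment().Spec.Strategy.Type) }
```

A rollout under this strategy trades availability for exclusivity: the "is not available" pod dump above is expected, since nothing serves while the swap happens.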
• [SLOW TEST:8.497 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":147,"skipped":2296,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:50:42.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 16:50:42.315: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Feb 18 16:50:42.401: INFO: Number of nodes with available pods: 0 Feb 18 16:50:42.401: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:50:44.490: INFO: Number of nodes with available pods: 0 Feb 18 16:50:44.491: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:50:46.343: INFO: Number of nodes with available pods: 0 Feb 18 16:50:46.343: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:50:46.591: INFO: Number of nodes with available pods: 0 Feb 18 16:50:46.592: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:50:47.473: INFO: Number of nodes with available pods: 0 Feb 18 16:50:47.473: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:50:48.418: INFO: Number of nodes with available pods: 0 Feb 18 16:50:48.419: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:50:49.663: INFO: Number of nodes with available pods: 0 Feb 18 16:50:49.663: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:50:51.228: INFO: Number of nodes with available pods: 0 Feb 18 16:50:51.229: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:50:51.411: INFO: Number of nodes with available pods: 0 Feb 18 16:50:51.411: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:50:52.415: INFO: Number of nodes with available pods: 0 Feb 18 16:50:52.415: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:50:53.419: INFO: Number of nodes with available pods: 0 Feb 18 16:50:53.419: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:50:54.413: INFO: Number of nodes with available pods: 1 Feb 18 16:50:54.413: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 18 16:50:55.446: INFO: Number of nodes with available pods: 2 Feb 18 
16:50:55.446: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Feb 18 16:50:55.485: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:50:55.485: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:50:56.522: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:50:56.522: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:50:57.826: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:50:57.826: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:50:58.518: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:50:58.518: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:50:59.519: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:50:59.519: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:00.520: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:00.520: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:01.516: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:01.516: INFO: Pod daemon-set-fcr22 is not available Feb 18 16:51:01.516: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:02.518: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:02.519: INFO: Pod daemon-set-fcr22 is not available Feb 18 16:51:02.519: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:03.516: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:03.516: INFO: Pod daemon-set-fcr22 is not available Feb 18 16:51:03.516: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:04.517: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Feb 18 16:51:04.517: INFO: Pod daemon-set-fcr22 is not available Feb 18 16:51:04.517: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:05.516: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:05.516: INFO: Pod daemon-set-fcr22 is not available Feb 18 16:51:05.516: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:06.515: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:06.515: INFO: Pod daemon-set-fcr22 is not available Feb 18 16:51:06.515: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:07.517: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:07.518: INFO: Pod daemon-set-fcr22 is not available Feb 18 16:51:07.518: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:08.521: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:08.522: INFO: Pod daemon-set-fcr22 is not available Feb 18 16:51:08.522: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:09.517: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:09.517: INFO: Pod daemon-set-fcr22 is not available Feb 18 16:51:09.517: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:10.523: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:10.523: INFO: Pod daemon-set-fcr22 is not available Feb 18 16:51:10.523: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:11.516: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:11.516: INFO: Pod daemon-set-fcr22 is not available Feb 18 16:51:11.516: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:12.518: INFO: Wrong image for pod: daemon-set-fcr22. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:12.518: INFO: Pod daemon-set-fcr22 is not available Feb 18 16:51:12.518: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:13.518: INFO: Wrong image for pod: daemon-set-gc6dn. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:13.518: INFO: Pod daemon-set-p2qpl is not available Feb 18 16:51:14.602: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:14.602: INFO: Pod daemon-set-p2qpl is not available Feb 18 16:51:15.517: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:15.517: INFO: Pod daemon-set-p2qpl is not available Feb 18 16:51:16.522: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:16.522: INFO: Pod daemon-set-p2qpl is not available Feb 18 16:51:17.515: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:17.515: INFO: Pod daemon-set-p2qpl is not available Feb 18 16:51:18.522: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:18.523: INFO: Pod daemon-set-p2qpl is not available Feb 18 16:51:19.583: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:19.583: INFO: Pod daemon-set-p2qpl is not available Feb 18 16:51:20.520: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:21.519: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:22.517: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:23.542: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:24.516: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:24.516: INFO: Pod daemon-set-gc6dn is not available Feb 18 16:51:25.515: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:25.515: INFO: Pod daemon-set-gc6dn is not available Feb 18 16:51:26.518: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:26.518: INFO: Pod daemon-set-gc6dn is not available Feb 18 16:51:27.514: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:27.514: INFO: Pod daemon-set-gc6dn is not available Feb 18 16:51:28.518: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:28.518: INFO: Pod daemon-set-gc6dn is not available Feb 18 16:51:29.516: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Feb 18 16:51:29.516: INFO: Pod daemon-set-gc6dn is not available Feb 18 16:51:30.518: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:30.518: INFO: Pod daemon-set-gc6dn is not available Feb 18 16:51:31.517: INFO: Wrong image for pod: daemon-set-gc6dn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 18 16:51:31.517: INFO: Pod daemon-set-gc6dn is not available Feb 18 16:51:32.517: INFO: Pod daemon-set-jpcg6 is not available STEP: Check that daemon pods are still running on every node of the cluster. Feb 18 16:51:32.531: INFO: Number of nodes with available pods: 1 Feb 18 16:51:32.531: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:51:33.546: INFO: Number of nodes with available pods: 1 Feb 18 16:51:33.547: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:51:34.546: INFO: Number of nodes with available pods: 1 Feb 18 16:51:34.546: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:51:35.561: INFO: Number of nodes with available pods: 1 Feb 18 16:51:35.561: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:51:36.551: INFO: Number of nodes with available pods: 1 Feb 18 16:51:36.551: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:51:37.552: INFO: Number of nodes with available pods: 1 Feb 18 16:51:37.552: INFO: Node jerma-node is running more than one daemon pod Feb 18 16:51:38.549: INFO: Number of nodes with available pods: 2 Feb 18 16:51:38.549: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3514, will wait for the garbage collector to delete the pods Feb 18 16:51:38.643: INFO: Deleting DaemonSet.extensions daemon-set took: 14.291057ms Feb 18 16:51:39.044: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.037241ms Feb 18 16:51:53.148: INFO: Number of nodes with available pods: 0 Feb 18 16:51:53.148: INFO: Number of running nodes: 0, number of available pods: 0 Feb 18 16:51:53.151: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3514/daemonsets","resourceVersion":"9216123"},"items":null} Feb 18 16:51:53.153: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3514/pods","resourceVersion":"9216123"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:51:53.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3514" for this suite. 
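The long "Wrong image for pod" stretch above is the RollingUpdate strategy at work: after the image change, daemon pods are replaced node by node, with at most maxUnavailable pods down at a time. A minimal sketch of a DaemonSet carrying that strategy, with k8s.io/api types; maxUnavailable is stated explicitly here (1 is also the API default), and the labels and images are illustrative:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func rollingUpdateDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	maxUnavailable := intstr.FromInt(1)
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Replace daemon pods one node at a time on spec changes.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type:          appsv1.RollingUpdateDaemonSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDaemonSet{MaxUnavailable: &maxUnavailable},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/httpd:2.4.38-alpine", // later patched to agnhost:2.8, as in the log
				}}},
			},
		},
	}
}

func main() { fmt.Println(rollingUpdateDaemonSet().Spec.UpdateStrategy.Type) }
```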
• [SLOW TEST:71.046 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":280,"completed":148,"skipped":2298,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:51:53.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:51:53.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2873" for this suite. 
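The discovery-document assertions above reduce to three raw GETs against the API server, one per STEP. Assuming kubectl access to the same cluster, an equivalent manual probe is:

# Aggregated discovery document: should list the apiextensions.k8s.io group and its v1 version
kubectl get --raw /apis
# Group-level discovery document
kubectl get --raw /apis/apiextensions.k8s.io
# Group/version-level document: should list a customresourcedefinitions resource
kubectl get --raw /apis/apiextensions.k8s.io/v1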
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":280,"completed":149,"skipped":2305,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:51:53.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 18 16:51:53.309: INFO: Waiting up to 5m0s for pod "pod-cda9fd6c-ecf3-4098-b00b-c028d7294d1b" in namespace "emptydir-7708" to be "success or failure" Feb 18 16:51:53.324: INFO: Pod "pod-cda9fd6c-ecf3-4098-b00b-c028d7294d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.198666ms Feb 18 16:51:55.331: INFO: Pod "pod-cda9fd6c-ecf3-4098-b00b-c028d7294d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021312462s Feb 18 16:51:57.338: INFO: Pod "pod-cda9fd6c-ecf3-4098-b00b-c028d7294d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02918158s Feb 18 16:51:59.347: INFO: Pod "pod-cda9fd6c-ecf3-4098-b00b-c028d7294d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037929694s Feb 18 16:52:01.355: INFO: Pod "pod-cda9fd6c-ecf3-4098-b00b-c028d7294d1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045357662s STEP: Saw pod success Feb 18 16:52:01.355: INFO: Pod "pod-cda9fd6c-ecf3-4098-b00b-c028d7294d1b" satisfied condition "success or failure" Feb 18 16:52:01.360: INFO: Trying to get logs from node jerma-node pod pod-cda9fd6c-ecf3-4098-b00b-c028d7294d1b container test-container: STEP: delete the pod Feb 18 16:52:01.655: INFO: Waiting for pod pod-cda9fd6c-ecf3-4098-b00b-c028d7294d1b to disappear Feb 18 16:52:01.673: INFO: Pod pod-cda9fd6c-ecf3-4098-b00b-c028d7294d1b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:52:01.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7708" for this suite. 
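The (non-root,0644,tmpfs) variant boils down to mounting a Memory-medium emptyDir, writing a file with mode 0644 as a non-root user, and reading the mode back. A hand-rolled sketch of the same shape is below; the pod name, UID, and busybox image are assumptions for illustration (the conformance test uses its own mount-test image and verifies the mode from inside the container).

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo         # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # the "non-root" part of the variant
  containers:
  - name: test-container
    image: busybox:1.31            # assumption; not the image the harness uses
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && grep /test-volume /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # the "tmpfs" part of the variant
EOF

The grep against /proc/mounts confirms the volume is tmpfs-backed, and the ls output shows the 0644 mode the test asserts.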
• [SLOW TEST:8.456 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":150,"skipped":2346,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:52:01.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 18 16:52:02.542: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 18 16:52:04.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641522, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641522, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641522, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641522, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:52:06.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641522, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641522, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641522, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63717641522, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:52:08.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641522, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641522, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641522, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641522, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 18 16:52:11.649: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:52:14.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-212" for this suite. STEP: Destroying namespace "webhook-212-markers" for this suite. 
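Listing the created validating webhooks and deleting them as a collection, as the STEPs above do through the client, correspond to plain operations on the admissionregistration.k8s.io API. A hedged kubectl equivalent is below; the label selector is a placeholder, since the test selects its own labeled configurations:

# "Listing all of the created validation webhooks"
kubectl get validatingwebhookconfigurations
# "Deleting the collection of validation webhooks" via a label-selected collection delete
kubectl delete validatingwebhookconfigurations -l example-label=example-value   # placeholder selector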
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:12.921 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":280,"completed":151,"skipped":2360,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:52:14.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-45b6546c-0d18-49a7-a571-5a07ab0f3072 STEP: Creating a pod to test consume secrets Feb 18 16:52:15.068: INFO: Waiting up to 5m0s for pod "pod-secrets-e7b415c7-8cbc-4d0c-bc6d-f9efafeae416" in namespace "secrets-1530" to be "success or failure" Feb 18 16:52:15.261: INFO: Pod "pod-secrets-e7b415c7-8cbc-4d0c-bc6d-f9efafeae416": Phase="Pending", Reason="", readiness=false. Elapsed: 192.976358ms Feb 18 16:52:17.266: INFO: Pod "pod-secrets-e7b415c7-8cbc-4d0c-bc6d-f9efafeae416": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197926295s Feb 18 16:52:19.279: INFO: Pod "pod-secrets-e7b415c7-8cbc-4d0c-bc6d-f9efafeae416": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211216395s Feb 18 16:52:21.288: INFO: Pod "pod-secrets-e7b415c7-8cbc-4d0c-bc6d-f9efafeae416": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219340274s Feb 18 16:52:23.297: INFO: Pod "pod-secrets-e7b415c7-8cbc-4d0c-bc6d-f9efafeae416": Phase="Pending", Reason="", readiness=false. Elapsed: 8.229120774s Feb 18 16:52:25.305: INFO: Pod "pod-secrets-e7b415c7-8cbc-4d0c-bc6d-f9efafeae416": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.236812294s STEP: Saw pod success Feb 18 16:52:25.305: INFO: Pod "pod-secrets-e7b415c7-8cbc-4d0c-bc6d-f9efafeae416" satisfied condition "success or failure" Feb 18 16:52:25.309: INFO: Trying to get logs from node jerma-node pod pod-secrets-e7b415c7-8cbc-4d0c-bc6d-f9efafeae416 container secret-volume-test: STEP: delete the pod Feb 18 16:52:25.410: INFO: Waiting for pod pod-secrets-e7b415c7-8cbc-4d0c-bc6d-f9efafeae416 to disappear Feb 18 16:52:25.414: INFO: Pod pod-secrets-e7b415c7-8cbc-4d0c-bc6d-f9efafeae416 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:52:25.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1530" for this suite. STEP: Destroying namespace "secret-namespace-2553" for this suite. • [SLOW TEST:10.825 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":280,"completed":152,"skipped":2360,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:52:25.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 18 16:52:25.566: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9f59747-35d4-4ead-b150-5fcf8e8fe9b0" in namespace "downward-api-3734" to be "success or failure" Feb 18 16:52:25.588: INFO: Pod "downwardapi-volume-c9f59747-35d4-4ead-b150-5fcf8e8fe9b0": Phase="Pending", Reason="", readiness=false. Elapsed: 21.744951ms Feb 18 16:52:27.599: INFO: Pod "downwardapi-volume-c9f59747-35d4-4ead-b150-5fcf8e8fe9b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033097867s Feb 18 16:52:29.608: INFO: Pod "downwardapi-volume-c9f59747-35d4-4ead-b150-5fcf8e8fe9b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04118184s Feb 18 16:52:31.616: INFO: Pod "downwardapi-volume-c9f59747-35d4-4ead-b150-5fcf8e8fe9b0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.049963826s Feb 18 16:52:33.626: INFO: Pod "downwardapi-volume-c9f59747-35d4-4ead-b150-5fcf8e8fe9b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059416164s STEP: Saw pod success Feb 18 16:52:33.626: INFO: Pod "downwardapi-volume-c9f59747-35d4-4ead-b150-5fcf8e8fe9b0" satisfied condition "success or failure" Feb 18 16:52:33.631: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c9f59747-35d4-4ead-b150-5fcf8e8fe9b0 container client-container: STEP: delete the pod Feb 18 16:52:33.719: INFO: Waiting for pod downwardapi-volume-c9f59747-35d4-4ead-b150-5fcf8e8fe9b0 to disappear Feb 18 16:52:33.725: INFO: Pod downwardapi-volume-c9f59747-35d4-4ead-b150-5fcf8e8fe9b0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:52:33.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3734" for this suite. • [SLOW TEST:8.302 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":153,"skipped":2360,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:52:33.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-62 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating stateful set ss in namespace statefulset-62 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-62 Feb 18 16:52:33.981: INFO: Found 0 stateful pods, waiting for 1 Feb 18 16:52:43.990: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 18 16:52:43.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-0 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 18 16:52:47.784: INFO: stderr: "I0218 16:52:47.422091 2431 log.go:172] (0xc0007cca50) (0xc00054d540) Create stream\nI0218 16:52:47.422166 2431 log.go:172] (0xc0007cca50) (0xc00054d540) Stream added, broadcasting: 1\nI0218 16:52:47.427133 2431 log.go:172] (0xc0007cca50) Reply frame received for 1\nI0218 16:52:47.427190 2431 log.go:172] (0xc0007cca50) (0xc0008940a0) Create stream\nI0218 16:52:47.427201 2431 log.go:172] (0xc0007cca50) (0xc0008940a0) Stream added, broadcasting: 3\nI0218 16:52:47.431782 2431 log.go:172] (0xc0007cca50) Reply frame received for 3\nI0218 16:52:47.432006 2431 log.go:172] (0xc0007cca50) (0xc000894140) Create stream\nI0218 16:52:47.432051 2431 log.go:172] (0xc0007cca50) (0xc000894140) Stream added, broadcasting: 5\nI0218 16:52:47.435212 2431 log.go:172] (0xc0007cca50) Reply frame received for 5\nI0218 16:52:47.530716 2431 log.go:172] (0xc0007cca50) Data frame received for 5\nI0218 16:52:47.530824 2431 log.go:172] (0xc000894140) (5) Data frame handling\nI0218 16:52:47.530855 2431 log.go:172] (0xc000894140) (5) Data frame sent\nI0218 16:52:47.530868 2431 log.go:172] (0xc0007cca50) Data frame received for 5\nI0218 16:52:47.530874 2431 log.go:172] (0xc000894140) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 16:52:47.530987 2431 log.go:172] (0xc000894140) (5) Data frame sent\nI0218 16:52:47.651744 2431 log.go:172] (0xc0007cca50) Data frame received for 3\nI0218 16:52:47.651834 2431 log.go:172] (0xc0008940a0) (3) Data frame handling\nI0218 16:52:47.651907 2431 log.go:172] (0xc0008940a0) (3) Data frame sent\nI0218 16:52:47.768414 2431 log.go:172] (0xc0007cca50) (0xc0008940a0) Stream removed, broadcasting: 3\nI0218 16:52:47.768627 2431 log.go:172] (0xc0007cca50) Data frame received for 1\nI0218 16:52:47.768655 2431 log.go:172] (0xc00054d540) (1) Data frame handling\nI0218 16:52:47.768685 2431 log.go:172] (0xc00054d540) (1) Data frame sent\nI0218 16:52:47.768700 2431 log.go:172] (0xc0007cca50) (0xc00054d540) Stream removed, broadcasting: 1\nI0218 16:52:47.768968 2431 log.go:172] (0xc0007cca50) (0xc000894140) Stream removed, broadcasting: 5\nI0218 16:52:47.769378 2431 log.go:172] (0xc0007cca50) Go away received\nI0218 16:52:47.769772 2431 log.go:172] (0xc0007cca50) (0xc00054d540) Stream removed, broadcasting: 1\nI0218 16:52:47.769817 2431 log.go:172] (0xc0007cca50) (0xc0008940a0) Stream removed, broadcasting: 3\nI0218 16:52:47.769834 2431 log.go:172] (0xc0007cca50) (0xc000894140) Stream removed, broadcasting: 5\n" Feb 18 16:52:47.785: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 18 16:52:47.785: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 18 16:52:47.795: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 18 16:52:57.805: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 18 16:52:57.805: INFO: Waiting for statefulset status.replicas updated to 0 Feb 18 16:52:57.889: INFO: POD NODE PHASE GRACE CONDITIONS Feb 18 16:52:57.889: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:48 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:34 +0000 UTC }] Feb 18 16:52:57.890: INFO: Feb 18 16:52:57.890: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 18 16:52:58.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.934304308s Feb 18 16:53:00.086: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.926299435s Feb 18 16:53:01.425: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.738158794s Feb 18 16:53:02.430: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.399562192s Feb 18 16:53:04.261: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.394547439s Feb 18 16:53:06.044: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.562711967s Feb 18 16:53:07.107: INFO: Verifying statefulset ss doesn't scale past 3 for another 780.218056ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-62 Feb 18 16:53:08.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:53:08.662: INFO: stderr: "I0218 16:53:08.380636 2463 log.go:172] (0xc0007ee790) (0xc0007ea280) Create stream\nI0218 16:53:08.380888 2463 log.go:172] (0xc0007ee790) (0xc0007ea280) Stream added, broadcasting: 1\nI0218 16:53:08.385269 2463 log.go:172] (0xc0007ee790) Reply frame received for 1\nI0218 16:53:08.385334 2463 log.go:172] (0xc0007ee790) (0xc0005d2820) Create stream\nI0218 16:53:08.385346 2463 log.go:172] (0xc0007ee790) (0xc0005d2820) Stream added, broadcasting: 3\nI0218 16:53:08.386825 2463 log.go:172] (0xc0007ee790) Reply frame received for 3\nI0218 16:53:08.386844 2463 log.go:172] (0xc0007ee790) (0xc0007ea320) Create stream\nI0218 16:53:08.386856 2463 log.go:172] (0xc0007ee790) (0xc0007ea320) Stream added, broadcasting: 5\nI0218 16:53:08.389066 2463 log.go:172] (0xc0007ee790) Reply frame received for 5\nI0218 16:53:08.529544 2463 log.go:172] (0xc0007ee790) Data frame received for 5\nI0218 16:53:08.530160 2463 log.go:172] (0xc0007ea320) (5) Data frame handling\nI0218 16:53:08.530232 2463 log.go:172] (0xc0007ea320) (5) Data frame sent\nI0218 16:53:08.530352 2463 log.go:172] (0xc0007ee790) Data frame received for 3\nI0218 16:53:08.530396 2463 log.go:172] (0xc0005d2820) (3) Data frame handling\nI0218 16:53:08.530432 2463 log.go:172] (0xc0005d2820) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 16:53:08.648170 2463 log.go:172] (0xc0007ee790) Data frame received for 1\nI0218 16:53:08.648251 2463 log.go:172] (0xc0007ee790) (0xc0005d2820) Stream removed, broadcasting: 3\nI0218 16:53:08.648340 2463 log.go:172] (0xc0007ea280) (1) Data frame handling\nI0218 16:53:08.648375 2463 log.go:172] (0xc0007ea280) (1) Data frame sent\nI0218 16:53:08.648393 2463 log.go:172] (0xc0007ee790) (0xc0007ea320) Stream removed, broadcasting: 5\nI0218 16:53:08.648437 2463 log.go:172] (0xc0007ee790) (0xc0007ea280) Stream removed, broadcasting: 1\nI0218 16:53:08.648455 2463 log.go:172] (0xc0007ee790) Go away received\nI0218 16:53:08.649249 2463 log.go:172] (0xc0007ee790) (0xc0007ea280) Stream removed, broadcasting: 1\nI0218 16:53:08.649258 2463 log.go:172] (0xc0007ee790) (0xc0005d2820) Stream removed, broadcasting: 3\nI0218 16:53:08.649261 2463 log.go:172] (0xc0007ee790) (0xc0007ea320) Stream removed, broadcasting: 5\n" Feb 18 16:53:08.662: 
INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 18 16:53:08.662: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 18 16:53:08.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:53:09.058: INFO: stderr: "I0218 16:53:08.822862 2484 log.go:172] (0xc0000f4370) (0xc0002b34a0) Create stream\nI0218 16:53:08.823052 2484 log.go:172] (0xc0000f4370) (0xc0002b34a0) Stream added, broadcasting: 1\nI0218 16:53:08.829137 2484 log.go:172] (0xc0000f4370) Reply frame received for 1\nI0218 16:53:08.829250 2484 log.go:172] (0xc0000f4370) (0xc00098c000) Create stream\nI0218 16:53:08.829273 2484 log.go:172] (0xc0000f4370) (0xc00098c000) Stream added, broadcasting: 3\nI0218 16:53:08.830430 2484 log.go:172] (0xc0000f4370) Reply frame received for 3\nI0218 16:53:08.830496 2484 log.go:172] (0xc0000f4370) (0xc0009d8000) Create stream\nI0218 16:53:08.830532 2484 log.go:172] (0xc0000f4370) (0xc0009d8000) Stream added, broadcasting: 5\nI0218 16:53:08.834392 2484 log.go:172] (0xc0000f4370) Reply frame received for 5\nI0218 16:53:08.925968 2484 log.go:172] (0xc0000f4370) Data frame received for 5\nI0218 16:53:08.926038 2484 log.go:172] (0xc0009d8000) (5) Data frame handling\nI0218 16:53:08.926072 2484 log.go:172] (0xc0009d8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 16:53:08.948690 2484 log.go:172] (0xc0000f4370) Data frame received for 5\nI0218 16:53:08.948733 2484 log.go:172] (0xc0009d8000) (5) Data frame handling\nI0218 16:53:08.948750 2484 log.go:172] (0xc0009d8000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0218 16:53:08.948762 2484 log.go:172] (0xc0000f4370) Data frame received for 3\nI0218 16:53:08.948823 2484 log.go:172] (0xc00098c000) (3) Data frame handling\nI0218 16:53:08.948845 2484 log.go:172] (0xc00098c000) (3) Data frame sent\nI0218 16:53:09.049493 2484 log.go:172] (0xc0000f4370) Data frame received for 1\nI0218 16:53:09.049541 2484 log.go:172] (0xc0000f4370) (0xc00098c000) Stream removed, broadcasting: 3\nI0218 16:53:09.049638 2484 log.go:172] (0xc0002b34a0) (1) Data frame handling\nI0218 16:53:09.049661 2484 log.go:172] (0xc0000f4370) (0xc0009d8000) Stream removed, broadcasting: 5\nI0218 16:53:09.049700 2484 log.go:172] (0xc0002b34a0) (1) Data frame sent\nI0218 16:53:09.049712 2484 log.go:172] (0xc0000f4370) (0xc0002b34a0) Stream removed, broadcasting: 1\nI0218 16:53:09.049732 2484 log.go:172] (0xc0000f4370) Go away received\nI0218 16:53:09.050413 2484 log.go:172] (0xc0000f4370) (0xc0002b34a0) Stream removed, broadcasting: 1\nI0218 16:53:09.050427 2484 log.go:172] (0xc0000f4370) (0xc00098c000) Stream removed, broadcasting: 3\nI0218 16:53:09.050433 2484 log.go:172] (0xc0000f4370) (0xc0009d8000) Stream removed, broadcasting: 5\n" Feb 18 16:53:09.059: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 18 16:53:09.059: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 18 16:53:09.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:53:09.418: INFO: stderr: "I0218 16:53:09.239763 2504 
log.go:172] (0xc0009213f0) (0xc0008ce5a0) Create stream\nI0218 16:53:09.239949 2504 log.go:172] (0xc0009213f0) (0xc0008ce5a0) Stream added, broadcasting: 1\nI0218 16:53:09.243051 2504 log.go:172] (0xc0009213f0) Reply frame received for 1\nI0218 16:53:09.243088 2504 log.go:172] (0xc0009213f0) (0xc0008ce640) Create stream\nI0218 16:53:09.243095 2504 log.go:172] (0xc0009213f0) (0xc0008ce640) Stream added, broadcasting: 3\nI0218 16:53:09.244195 2504 log.go:172] (0xc0009213f0) Reply frame received for 3\nI0218 16:53:09.244221 2504 log.go:172] (0xc0009213f0) (0xc0009c2000) Create stream\nI0218 16:53:09.244254 2504 log.go:172] (0xc0009213f0) (0xc0009c2000) Stream added, broadcasting: 5\nI0218 16:53:09.245612 2504 log.go:172] (0xc0009213f0) Reply frame received for 5\nI0218 16:53:09.318824 2504 log.go:172] (0xc0009213f0) Data frame received for 5\nI0218 16:53:09.318887 2504 log.go:172] (0xc0009c2000) (5) Data frame handling\nI0218 16:53:09.318916 2504 log.go:172] (0xc0009c2000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 16:53:09.319098 2504 log.go:172] (0xc0009213f0) Data frame received for 3\nI0218 16:53:09.319114 2504 log.go:172] (0xc0008ce640) (3) Data frame handling\nI0218 16:53:09.319123 2504 log.go:172] (0xc0008ce640) (3) Data frame sent\nI0218 16:53:09.319154 2504 log.go:172] (0xc0009213f0) Data frame received for 5\nI0218 16:53:09.319160 2504 log.go:172] (0xc0009c2000) (5) Data frame handling\nI0218 16:53:09.319166 2504 log.go:172] (0xc0009c2000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0218 16:53:09.319268 2504 log.go:172] (0xc0009213f0) Data frame received for 5\nI0218 16:53:09.319295 2504 log.go:172] (0xc0009c2000) (5) Data frame handling\nI0218 16:53:09.319312 2504 log.go:172] (0xc0009c2000) (5) Data frame sent\n+ true\nI0218 16:53:09.403365 2504 log.go:172] (0xc0009213f0) (0xc0008ce640) Stream removed, broadcasting: 3\nI0218 16:53:09.403689 2504 log.go:172] (0xc0009213f0) Data frame received for 1\nI0218 16:53:09.403734 2504 log.go:172] (0xc0008ce5a0) (1) Data frame handling\nI0218 16:53:09.403784 2504 log.go:172] (0xc0008ce5a0) (1) Data frame sent\nI0218 16:53:09.403825 2504 log.go:172] (0xc0009213f0) (0xc0008ce5a0) Stream removed, broadcasting: 1\nI0218 16:53:09.404124 2504 log.go:172] (0xc0009213f0) (0xc0009c2000) Stream removed, broadcasting: 5\nI0218 16:53:09.404644 2504 log.go:172] (0xc0009213f0) Go away received\nI0218 16:53:09.405417 2504 log.go:172] (0xc0009213f0) (0xc0008ce5a0) Stream removed, broadcasting: 1\nI0218 16:53:09.405489 2504 log.go:172] (0xc0009213f0) (0xc0008ce640) Stream removed, broadcasting: 3\nI0218 16:53:09.405498 2504 log.go:172] (0xc0009213f0) (0xc0009c2000) Stream removed, broadcasting: 5\n" Feb 18 16:53:09.418: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 18 16:53:09.418: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 18 16:53:09.425: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 18 16:53:09.425: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 18 16:53:09.425: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 18 16:53:09.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-0 -- /bin/sh -x -c mv 
-v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 18 16:53:09.715: INFO: stderr: "I0218 16:53:09.580010 2524 log.go:172] (0xc0001066e0) (0xc000683f40) Create stream\nI0218 16:53:09.580136 2524 log.go:172] (0xc0001066e0) (0xc000683f40) Stream added, broadcasting: 1\nI0218 16:53:09.583783 2524 log.go:172] (0xc0001066e0) Reply frame received for 1\nI0218 16:53:09.583855 2524 log.go:172] (0xc0001066e0) (0xc0005f48c0) Create stream\nI0218 16:53:09.583875 2524 log.go:172] (0xc0001066e0) (0xc0005f48c0) Stream added, broadcasting: 3\nI0218 16:53:09.584988 2524 log.go:172] (0xc0001066e0) Reply frame received for 3\nI0218 16:53:09.585023 2524 log.go:172] (0xc0001066e0) (0xc0003fb540) Create stream\nI0218 16:53:09.585035 2524 log.go:172] (0xc0001066e0) (0xc0003fb540) Stream added, broadcasting: 5\nI0218 16:53:09.587231 2524 log.go:172] (0xc0001066e0) Reply frame received for 5\nI0218 16:53:09.643465 2524 log.go:172] (0xc0001066e0) Data frame received for 5\nI0218 16:53:09.643582 2524 log.go:172] (0xc0003fb540) (5) Data frame handling\nI0218 16:53:09.643624 2524 log.go:172] (0xc0003fb540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 16:53:09.644944 2524 log.go:172] (0xc0001066e0) Data frame received for 3\nI0218 16:53:09.644969 2524 log.go:172] (0xc0005f48c0) (3) Data frame handling\nI0218 16:53:09.644997 2524 log.go:172] (0xc0005f48c0) (3) Data frame sent\nI0218 16:53:09.704841 2524 log.go:172] (0xc0001066e0) Data frame received for 1\nI0218 16:53:09.704895 2524 log.go:172] (0xc0001066e0) (0xc0005f48c0) Stream removed, broadcasting: 3\nI0218 16:53:09.704941 2524 log.go:172] (0xc000683f40) (1) Data frame handling\nI0218 16:53:09.704960 2524 log.go:172] (0xc000683f40) (1) Data frame sent\nI0218 16:53:09.704971 2524 log.go:172] (0xc0001066e0) (0xc000683f40) Stream removed, broadcasting: 1\nI0218 16:53:09.705571 2524 log.go:172] (0xc0001066e0) (0xc0003fb540) Stream removed, broadcasting: 5\nI0218 16:53:09.705609 2524 log.go:172] (0xc0001066e0) (0xc000683f40) Stream removed, broadcasting: 1\nI0218 16:53:09.705626 2524 log.go:172] (0xc0001066e0) (0xc0005f48c0) Stream removed, broadcasting: 3\nI0218 16:53:09.705639 2524 log.go:172] (0xc0001066e0) (0xc0003fb540) Stream removed, broadcasting: 5\nI0218 16:53:09.705738 2524 log.go:172] (0xc0001066e0) Go away received\n" Feb 18 16:53:09.716: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 18 16:53:09.716: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 18 16:53:09.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 18 16:53:10.124: INFO: stderr: "I0218 16:53:09.881138 2546 log.go:172] (0xc000a6e420) (0xc000ac83c0) Create stream\nI0218 16:53:09.881252 2546 log.go:172] (0xc000a6e420) (0xc000ac83c0) Stream added, broadcasting: 1\nI0218 16:53:09.884994 2546 log.go:172] (0xc000a6e420) Reply frame received for 1\nI0218 16:53:09.885168 2546 log.go:172] (0xc000a6e420) (0xc000ac8460) Create stream\nI0218 16:53:09.885203 2546 log.go:172] (0xc000a6e420) (0xc000ac8460) Stream added, broadcasting: 3\nI0218 16:53:09.886977 2546 log.go:172] (0xc000a6e420) Reply frame received for 3\nI0218 16:53:09.887053 2546 log.go:172] (0xc000a6e420) (0xc000ac8500) Create stream\nI0218 16:53:09.887082 2546 log.go:172] (0xc000a6e420) (0xc000ac8500) Stream added, broadcasting: 5\nI0218 
16:53:09.888927 2546 log.go:172] (0xc000a6e420) Reply frame received for 5\nI0218 16:53:09.995482 2546 log.go:172] (0xc000a6e420) Data frame received for 5\nI0218 16:53:09.995541 2546 log.go:172] (0xc000ac8500) (5) Data frame handling\nI0218 16:53:09.995558 2546 log.go:172] (0xc000ac8500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 16:53:10.023944 2546 log.go:172] (0xc000a6e420) Data frame received for 3\nI0218 16:53:10.023971 2546 log.go:172] (0xc000ac8460) (3) Data frame handling\nI0218 16:53:10.023989 2546 log.go:172] (0xc000ac8460) (3) Data frame sent\nI0218 16:53:10.111120 2546 log.go:172] (0xc000a6e420) Data frame received for 1\nI0218 16:53:10.111201 2546 log.go:172] (0xc000ac83c0) (1) Data frame handling\nI0218 16:53:10.111250 2546 log.go:172] (0xc000ac83c0) (1) Data frame sent\nI0218 16:53:10.111296 2546 log.go:172] (0xc000a6e420) (0xc000ac83c0) Stream removed, broadcasting: 1\nI0218 16:53:10.111405 2546 log.go:172] (0xc000a6e420) (0xc000ac8500) Stream removed, broadcasting: 5\nI0218 16:53:10.111539 2546 log.go:172] (0xc000a6e420) (0xc000ac8460) Stream removed, broadcasting: 3\nI0218 16:53:10.111834 2546 log.go:172] (0xc000a6e420) Go away received\nI0218 16:53:10.112881 2546 log.go:172] (0xc000a6e420) (0xc000ac83c0) Stream removed, broadcasting: 1\nI0218 16:53:10.112928 2546 log.go:172] (0xc000a6e420) (0xc000ac8460) Stream removed, broadcasting: 3\nI0218 16:53:10.112942 2546 log.go:172] (0xc000a6e420) (0xc000ac8500) Stream removed, broadcasting: 5\n" Feb 18 16:53:10.125: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 18 16:53:10.125: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 18 16:53:10.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 18 16:53:10.472: INFO: stderr: "I0218 16:53:10.272953 2566 log.go:172] (0xc0009f6580) (0xc0008ee0a0) Create stream\nI0218 16:53:10.273054 2566 log.go:172] (0xc0009f6580) (0xc0008ee0a0) Stream added, broadcasting: 1\nI0218 16:53:10.276075 2566 log.go:172] (0xc0009f6580) Reply frame received for 1\nI0218 16:53:10.276113 2566 log.go:172] (0xc0009f6580) (0xc0009d6000) Create stream\nI0218 16:53:10.276123 2566 log.go:172] (0xc0009f6580) (0xc0009d6000) Stream added, broadcasting: 3\nI0218 16:53:10.277272 2566 log.go:172] (0xc0009f6580) Reply frame received for 3\nI0218 16:53:10.277301 2566 log.go:172] (0xc0009f6580) (0xc0008ee140) Create stream\nI0218 16:53:10.277307 2566 log.go:172] (0xc0009f6580) (0xc0008ee140) Stream added, broadcasting: 5\nI0218 16:53:10.278441 2566 log.go:172] (0xc0009f6580) Reply frame received for 5\nI0218 16:53:10.340944 2566 log.go:172] (0xc0009f6580) Data frame received for 5\nI0218 16:53:10.340978 2566 log.go:172] (0xc0008ee140) (5) Data frame handling\nI0218 16:53:10.341000 2566 log.go:172] (0xc0008ee140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 16:53:10.382027 2566 log.go:172] (0xc0009f6580) Data frame received for 3\nI0218 16:53:10.382102 2566 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0218 16:53:10.382145 2566 log.go:172] (0xc0009d6000) (3) Data frame sent\nI0218 16:53:10.457313 2566 log.go:172] (0xc0009f6580) Data frame received for 1\nI0218 16:53:10.457436 2566 log.go:172] (0xc0009f6580) (0xc0009d6000) Stream removed, broadcasting: 3\nI0218 16:53:10.457493 2566 
log.go:172] (0xc0008ee0a0) (1) Data frame handling\nI0218 16:53:10.457511 2566 log.go:172] (0xc0008ee0a0) (1) Data frame sent\nI0218 16:53:10.457570 2566 log.go:172] (0xc0009f6580) (0xc0008ee140) Stream removed, broadcasting: 5\nI0218 16:53:10.457607 2566 log.go:172] (0xc0009f6580) (0xc0008ee0a0) Stream removed, broadcasting: 1\nI0218 16:53:10.457628 2566 log.go:172] (0xc0009f6580) Go away received\nI0218 16:53:10.458701 2566 log.go:172] (0xc0009f6580) (0xc0008ee0a0) Stream removed, broadcasting: 1\nI0218 16:53:10.458721 2566 log.go:172] (0xc0009f6580) (0xc0009d6000) Stream removed, broadcasting: 3\nI0218 16:53:10.458739 2566 log.go:172] (0xc0009f6580) (0xc0008ee140) Stream removed, broadcasting: 5\n" Feb 18 16:53:10.472: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 18 16:53:10.472: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 18 16:53:10.472: INFO: Waiting for statefulset status.replicas updated to 0 Feb 18 16:53:10.482: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 18 16:53:20.506: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 18 16:53:20.507: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 18 16:53:20.507: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 18 16:53:20.526: INFO: POD NODE PHASE GRACE CONDITIONS Feb 18 16:53:20.526: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:34 +0000 UTC }] Feb 18 16:53:20.527: INFO: ss-1 jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:20.527: INFO: ss-2 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:20.527: INFO: Feb 18 16:53:20.527: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 18 16:53:22.725: INFO: POD NODE PHASE GRACE CONDITIONS Feb 18 16:53:22.726: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 
16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:34 +0000 UTC }] Feb 18 16:53:22.726: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:22.726: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:22.726: INFO: Feb 18 16:53:22.726: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 18 16:53:23.736: INFO: POD NODE PHASE GRACE CONDITIONS Feb 18 16:53:23.736: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:34 +0000 UTC }] Feb 18 16:53:23.736: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:23.736: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:23.736: INFO: Feb 18 16:53:23.736: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 18 16:53:24.775: INFO: POD NODE PHASE GRACE CONDITIONS Feb 18 16:53:24.776: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:34 +0000 UTC }] Feb 18 16:53:24.776: INFO: ss-1 
jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:24.776: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:24.776: INFO: Feb 18 16:53:24.776: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 18 16:53:25.786: INFO: POD NODE PHASE GRACE CONDITIONS Feb 18 16:53:25.786: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:34 +0000 UTC }] Feb 18 16:53:25.786: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:25.786: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:25.786: INFO: Feb 18 16:53:25.786: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 18 16:53:26.799: INFO: POD NODE PHASE GRACE CONDITIONS Feb 18 16:53:26.799: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:34 +0000 UTC }] Feb 18 16:53:26.800: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:26.800: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:26.800: INFO: Feb 18 16:53:26.800: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 18 16:53:27.808: INFO: POD NODE PHASE GRACE CONDITIONS Feb 18 16:53:27.808: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:27.809: INFO: Feb 18 16:53:27.809: INFO: StatefulSet ss has not reached scale 0, at 1 Feb 18 16:53:28.815: INFO: POD NODE PHASE GRACE CONDITIONS Feb 18 16:53:28.816: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:28.816: INFO: Feb 18 16:53:28.816: INFO: StatefulSet ss has not reached scale 0, at 1 Feb 18 16:53:29.825: INFO: POD NODE PHASE GRACE CONDITIONS Feb 18 16:53:29.826: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:53:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-18 16:52:57 +0000 UTC }] Feb 18 16:53:29.826: INFO: Feb 18 16:53:29.826: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-62 Feb 18 16:53:30.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:53:31.047: INFO: rc: 1 Feb 18 16:53:31.047: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found
("webserver") error: exit status 1 Feb 18 16:53:41.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:53:41.216: INFO: rc: 1 Feb 18 16:53:41.216: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:53:51.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:53:51.382: INFO: rc: 1 Feb 18 16:53:51.383: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:54:01.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:54:01.554: INFO: rc: 1 Feb 18 16:54:01.554: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:54:11.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:54:11.726: INFO: rc: 1 Feb 18 16:54:11.726: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:54:21.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:54:21.915: INFO: rc: 1 Feb 18 16:54:21.915: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:54:31.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:54:32.077: INFO: rc: 1 Feb 18 16:54:32.078: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:54:42.078: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:54:42.288: INFO: rc: 1 Feb 18 16:54:42.288: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:54:52.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:54:52.459: INFO: rc: 1 Feb 18 16:54:52.460: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:55:02.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:55:02.594: INFO: rc: 1 Feb 18 16:55:02.595: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:55:12.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:55:12.741: INFO: rc: 1 Feb 18 16:55:12.741: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:55:22.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:55:22.921: INFO: rc: 1 Feb 18 16:55:22.921: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:55:32.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:55:33.077: INFO: rc: 1 Feb 18 16:55:33.077: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:55:43.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:55:43.211: INFO: rc: 1 Feb 18 16:55:43.211: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:55:53.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:55:53.346: INFO: rc: 1 Feb 18 16:55:53.346: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:56:03.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:56:03.531: INFO: rc: 1 Feb 18 16:56:03.531: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:56:13.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:56:13.697: INFO: rc: 1 Feb 18 16:56:13.697: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:56:23.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:56:23.878: INFO: rc: 1 Feb 18 16:56:23.879: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:56:33.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:56:34.177: INFO: rc: 1 Feb 18 16:56:34.177: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:56:44.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:56:44.282: INFO: rc: 1 Feb 18 16:56:44.282: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:56:54.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:56:54.395: INFO: rc: 1 Feb 18 16:56:54.395: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:57:04.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:57:04.557: INFO: rc: 1 Feb 18 16:57:04.557: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:57:14.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:57:14.693: INFO: rc: 1 Feb 18 16:57:14.693: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:57:24.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:57:24.838: INFO: rc: 1 Feb 18 16:57:24.838: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:57:34.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:57:35.024: INFO: rc: 1 Feb 18 16:57:35.025: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:57:45.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 
16:57:45.195: INFO: rc: 1 Feb 18 16:57:45.196: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:57:55.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:57:55.307: INFO: rc: 1 Feb 18 16:57:55.307: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:58:05.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:58:05.522: INFO: rc: 1 Feb 18 16:58:05.522: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:58:15.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:58:15.695: INFO: rc: 1 Feb 18 16:58:15.695: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:58:25.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:58:25.882: INFO: rc: 1 Feb 18 16:58:25.883: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 18 16:58:35.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-62 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 16:58:36.334: INFO: rc: 1 Feb 18 16:58:36.334: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Feb 18 16:58:36.334: INFO: Scaling statefulset ss to 0 Feb 18 16:58:36.348: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Feb 18 16:58:36.351: INFO: Deleting all statefulset in ns statefulset-62 Feb 18 16:58:36.357: INFO: Scaling statefulset ss to 0 Feb 18 16:58:36.371: INFO: Waiting for statefulset status.replicas updated to 0 Feb 18 
16:58:36.374: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:58:36.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-62" for this suite. • [SLOW TEST:362.669 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":280,"completed":154,"skipped":2370,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:58:36.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 18 16:58:37.286: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 18 16:58:39.316: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641917, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641917, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641917, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641917, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:58:41.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641917, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641917, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641917, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641917, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 16:58:43.322: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641917, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641917, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641917, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717641917, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 18 16:58:46.381: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 16:58:46.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9210" for this suite. STEP: Destroying namespace "webhook-9210-markers" for this suite. 
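For context, the create/delete cycle the steps above exercise can be reproduced with a minimal client-go sketch: creating a dummy ValidatingWebhookConfiguration and then deleting it, which must succeed even while the test's own webhooks watch configuration objects. This assumes a recent client-go (the context arguments were added in 0.18; the 1.17-era client omits them), and the kubeconfig path and object name are illustrative.

package main

import (
	"context"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path copied from the run above; adjust for your cluster.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	client := clientset.AdmissionregistrationV1().ValidatingWebhookConfigurations()

	// A "dummy" configuration with no webhooks is enough for a
	// create/delete round-trip; the name is illustrative.
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-dummy-validating-cfg"},
	}
	created, err := client.Create(context.TODO(), cfg, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)

	// The point of the test: this delete must go through even though
	// webhooks are registered on configuration objects themselves.
	if err := client.Delete(context.TODO(), created.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}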
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.279 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":280,"completed":155,"skipped":2374,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 16:58:46.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod test-webserver-5063e4af-2577-4ac7-ac85-4b76af5c4c7a in namespace container-probe-9502 Feb 18 16:58:54.966: INFO: Started pod test-webserver-5063e4af-2577-4ac7-ac85-4b76af5c4c7a in namespace container-probe-9502 STEP: checking the pod's current state and verifying that restartCount is present Feb 18 16:58:54.969: INFO: Initial restart count of pod test-webserver-5063e4af-2577-4ac7-ac85-4b76af5c4c7a is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:02:56.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9502" for this suite. 
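The probe test above asserts that restartCount stays 0 for roughly four minutes. A sketch of the pod shape involved: a webserver container with an HTTP GET liveness probe on /healthz, which the kubelet leaves alone as long as the endpoint answers 2xx. The image name and port are assumptions, and the probe field is named Handler in the 1.17-era API used here (renamed ProbeHandler in client-go 0.22+).

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// healthzPod returns a pod whose container carries an HTTP liveness probe;
// while /healthz keeps returning success, the kubelet never restarts it.
func healthzPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "example/test-webserver", // illustrative image
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in newer APIs
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(80),
						},
					},
					InitialDelaySeconds: 15,
					TimeoutSeconds:      1,
					FailureThreshold:    3,
				},
			}},
		},
	}
}

func main() { _ = healthzPod() }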
• [SLOW TEST:249.914 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":156,"skipped":2374,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:02:56.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 18 17:03:14.934: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9686 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 17:03:14.934: INFO: >>> kubeConfig: /root/.kube/config I0218 17:03:14.993610 9 log.go:172] (0xc002ad6000) (0xc000eb0140) Create stream I0218 17:03:14.994001 9 log.go:172] (0xc002ad6000) (0xc000eb0140) Stream added, broadcasting: 1 I0218 17:03:14.997087 9 log.go:172] (0xc002ad6000) Reply frame received for 1 I0218 17:03:14.997160 9 log.go:172] (0xc002ad6000) (0xc000d30140) Create stream I0218 17:03:14.997168 9 log.go:172] (0xc002ad6000) (0xc000d30140) Stream added, broadcasting: 3 I0218 17:03:14.998835 9 log.go:172] (0xc002ad6000) Reply frame received for 3 I0218 17:03:14.998938 9 log.go:172] (0xc002ad6000) (0xc000eb0280) Create stream I0218 17:03:14.998948 9 log.go:172] (0xc002ad6000) (0xc000eb0280) Stream added, broadcasting: 5 I0218 17:03:15.002728 9 log.go:172] (0xc002ad6000) Reply frame received for 5 I0218 17:03:15.089757 9 log.go:172] (0xc002ad6000) Data frame received for 3 I0218 17:03:15.089910 9 log.go:172] (0xc000d30140) (3) Data frame handling I0218 17:03:15.089936 9 log.go:172] (0xc000d30140) (3) Data frame sent I0218 17:03:15.167513 9 log.go:172] (0xc002ad6000) (0xc000d30140) Stream removed, broadcasting: 3 I0218 17:03:15.167757 9 log.go:172] (0xc002ad6000) Data frame received for 1 I0218 17:03:15.167769 9 log.go:172] (0xc000eb0140) (1) Data frame handling I0218 17:03:15.167778 9 log.go:172] (0xc000eb0140) (1) Data frame sent I0218 17:03:15.167786 9 log.go:172] (0xc002ad6000) (0xc000eb0140) Stream removed, broadcasting: 1 I0218 17:03:15.167966 9 log.go:172] (0xc002ad6000) (0xc000eb0280) Stream removed, broadcasting: 5 I0218 
17:03:15.167993 9 log.go:172] (0xc002ad6000) (0xc000eb0140) Stream removed, broadcasting: 1 I0218 17:03:15.168008 9 log.go:172] (0xc002ad6000) (0xc000d30140) Stream removed, broadcasting: 3 I0218 17:03:15.168013 9 log.go:172] (0xc002ad6000) (0xc000eb0280) Stream removed, broadcasting: 5 Feb 18 17:03:15.168: INFO: Exec stderr: "" I0218 17:03:15.168618 9 log.go:172] (0xc002ad6000) Go away received Feb 18 17:03:15.168: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9686 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 17:03:15.168: INFO: >>> kubeConfig: /root/.kube/config I0218 17:03:15.248843 9 log.go:172] (0xc001f4c420) (0xc0028eeb40) Create stream I0218 17:03:15.248926 9 log.go:172] (0xc001f4c420) (0xc0028eeb40) Stream added, broadcasting: 1 I0218 17:03:15.253550 9 log.go:172] (0xc001f4c420) Reply frame received for 1 I0218 17:03:15.253590 9 log.go:172] (0xc001f4c420) (0xc001cfa960) Create stream I0218 17:03:15.253607 9 log.go:172] (0xc001f4c420) (0xc001cfa960) Stream added, broadcasting: 3 I0218 17:03:15.255853 9 log.go:172] (0xc001f4c420) Reply frame received for 3 I0218 17:03:15.255882 9 log.go:172] (0xc001f4c420) (0xc000eb03c0) Create stream I0218 17:03:15.255893 9 log.go:172] (0xc001f4c420) (0xc000eb03c0) Stream added, broadcasting: 5 I0218 17:03:15.258053 9 log.go:172] (0xc001f4c420) Reply frame received for 5 I0218 17:03:15.336596 9 log.go:172] (0xc001f4c420) Data frame received for 3 I0218 17:03:15.336706 9 log.go:172] (0xc001cfa960) (3) Data frame handling I0218 17:03:15.336724 9 log.go:172] (0xc001cfa960) (3) Data frame sent I0218 17:03:15.396156 9 log.go:172] (0xc001f4c420) (0xc001cfa960) Stream removed, broadcasting: 3 I0218 17:03:15.396301 9 log.go:172] (0xc001f4c420) Data frame received for 1 I0218 17:03:15.396340 9 log.go:172] (0xc001f4c420) (0xc000eb03c0) Stream removed, broadcasting: 5 I0218 17:03:15.396445 9 log.go:172] (0xc0028eeb40) (1) Data frame handling I0218 17:03:15.396547 9 log.go:172] (0xc0028eeb40) (1) Data frame sent I0218 17:03:15.396584 9 log.go:172] (0xc001f4c420) (0xc0028eeb40) Stream removed, broadcasting: 1 I0218 17:03:15.396615 9 log.go:172] (0xc001f4c420) Go away received I0218 17:03:15.397144 9 log.go:172] (0xc001f4c420) (0xc0028eeb40) Stream removed, broadcasting: 1 I0218 17:03:15.397297 9 log.go:172] (0xc001f4c420) (0xc001cfa960) Stream removed, broadcasting: 3 I0218 17:03:15.397309 9 log.go:172] (0xc001f4c420) (0xc000eb03c0) Stream removed, broadcasting: 5 Feb 18 17:03:15.397: INFO: Exec stderr: "" Feb 18 17:03:15.397: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9686 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 17:03:15.397: INFO: >>> kubeConfig: /root/.kube/config I0218 17:03:15.448161 9 log.go:172] (0xc001ed11e0) (0xc000d30b40) Create stream I0218 17:03:15.448292 9 log.go:172] (0xc001ed11e0) (0xc000d30b40) Stream added, broadcasting: 1 I0218 17:03:15.451366 9 log.go:172] (0xc001ed11e0) Reply frame received for 1 I0218 17:03:15.451388 9 log.go:172] (0xc001ed11e0) (0xc000eb05a0) Create stream I0218 17:03:15.451394 9 log.go:172] (0xc001ed11e0) (0xc000eb05a0) Stream added, broadcasting: 3 I0218 17:03:15.452614 9 log.go:172] (0xc001ed11e0) Reply frame received for 3 I0218 17:03:15.452646 9 log.go:172] (0xc001ed11e0) (0xc002958460) Create stream I0218 17:03:15.452654 9 log.go:172] (0xc001ed11e0) (0xc002958460) Stream added, 
broadcasting: 5 I0218 17:03:15.453665 9 log.go:172] (0xc001ed11e0) Reply frame received for 5 I0218 17:03:15.534058 9 log.go:172] (0xc001ed11e0) Data frame received for 3 I0218 17:03:15.534152 9 log.go:172] (0xc000eb05a0) (3) Data frame handling I0218 17:03:15.534187 9 log.go:172] (0xc000eb05a0) (3) Data frame sent I0218 17:03:15.606027 9 log.go:172] (0xc001ed11e0) (0xc000eb05a0) Stream removed, broadcasting: 3 I0218 17:03:15.606254 9 log.go:172] (0xc001ed11e0) Data frame received for 1 I0218 17:03:15.606419 9 log.go:172] (0xc001ed11e0) (0xc002958460) Stream removed, broadcasting: 5 I0218 17:03:15.606476 9 log.go:172] (0xc000d30b40) (1) Data frame handling I0218 17:03:15.606513 9 log.go:172] (0xc000d30b40) (1) Data frame sent I0218 17:03:15.606584 9 log.go:172] (0xc001ed11e0) (0xc000d30b40) Stream removed, broadcasting: 1 I0218 17:03:15.606620 9 log.go:172] (0xc001ed11e0) Go away received I0218 17:03:15.606813 9 log.go:172] (0xc001ed11e0) (0xc000d30b40) Stream removed, broadcasting: 1 I0218 17:03:15.606840 9 log.go:172] (0xc001ed11e0) (0xc000eb05a0) Stream removed, broadcasting: 3 I0218 17:03:15.606853 9 log.go:172] (0xc001ed11e0) (0xc002958460) Stream removed, broadcasting: 5 Feb 18 17:03:15.606: INFO: Exec stderr: "" Feb 18 17:03:15.607: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9686 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 17:03:15.607: INFO: >>> kubeConfig: /root/.kube/config I0218 17:03:15.663958 9 log.go:172] (0xc002c24420) (0xc001cfb040) Create stream I0218 17:03:15.664121 9 log.go:172] (0xc002c24420) (0xc001cfb040) Stream added, broadcasting: 1 I0218 17:03:15.667199 9 log.go:172] (0xc002c24420) Reply frame received for 1 I0218 17:03:15.667272 9 log.go:172] (0xc002c24420) (0xc002958640) Create stream I0218 17:03:15.667283 9 log.go:172] (0xc002c24420) (0xc002958640) Stream added, broadcasting: 3 I0218 17:03:15.668429 9 log.go:172] (0xc002c24420) Reply frame received for 3 I0218 17:03:15.668452 9 log.go:172] (0xc002c24420) (0xc000d30c80) Create stream I0218 17:03:15.668458 9 log.go:172] (0xc002c24420) (0xc000d30c80) Stream added, broadcasting: 5 I0218 17:03:15.669465 9 log.go:172] (0xc002c24420) Reply frame received for 5 I0218 17:03:15.720787 9 log.go:172] (0xc002c24420) Data frame received for 3 I0218 17:03:15.720885 9 log.go:172] (0xc002958640) (3) Data frame handling I0218 17:03:15.720912 9 log.go:172] (0xc002958640) (3) Data frame sent I0218 17:03:15.787426 9 log.go:172] (0xc002c24420) Data frame received for 1 I0218 17:03:15.787561 9 log.go:172] (0xc001cfb040) (1) Data frame handling I0218 17:03:15.787594 9 log.go:172] (0xc001cfb040) (1) Data frame sent I0218 17:03:15.788846 9 log.go:172] (0xc002c24420) (0xc001cfb040) Stream removed, broadcasting: 1 I0218 17:03:15.790080 9 log.go:172] (0xc002c24420) (0xc002958640) Stream removed, broadcasting: 3 I0218 17:03:15.790471 9 log.go:172] (0xc002c24420) (0xc000d30c80) Stream removed, broadcasting: 5 I0218 17:03:15.790506 9 log.go:172] (0xc002c24420) Go away received I0218 17:03:15.790587 9 log.go:172] (0xc002c24420) (0xc001cfb040) Stream removed, broadcasting: 1 I0218 17:03:15.790664 9 log.go:172] (0xc002c24420) (0xc002958640) Stream removed, broadcasting: 3 I0218 17:03:15.790691 9 log.go:172] (0xc002c24420) (0xc000d30c80) Stream removed, broadcasting: 5 Feb 18 17:03:15.790: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 18 
17:03:15.790: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9686 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 17:03:15.790: INFO: >>> kubeConfig: /root/.kube/config I0218 17:03:15.843744 9 log.go:172] (0xc002c24b00) (0xc001cfb180) Create stream I0218 17:03:15.843887 9 log.go:172] (0xc002c24b00) (0xc001cfb180) Stream added, broadcasting: 1 I0218 17:03:15.847617 9 log.go:172] (0xc002c24b00) Reply frame received for 1 I0218 17:03:15.847652 9 log.go:172] (0xc002c24b00) (0xc001cfb220) Create stream I0218 17:03:15.847662 9 log.go:172] (0xc002c24b00) (0xc001cfb220) Stream added, broadcasting: 3 I0218 17:03:15.849120 9 log.go:172] (0xc002c24b00) Reply frame received for 3 I0218 17:03:15.849143 9 log.go:172] (0xc002c24b00) (0xc000d30e60) Create stream I0218 17:03:15.849155 9 log.go:172] (0xc002c24b00) (0xc000d30e60) Stream added, broadcasting: 5 I0218 17:03:15.850756 9 log.go:172] (0xc002c24b00) Reply frame received for 5 I0218 17:03:15.954593 9 log.go:172] (0xc002c24b00) Data frame received for 3 I0218 17:03:15.954766 9 log.go:172] (0xc001cfb220) (3) Data frame handling I0218 17:03:15.954805 9 log.go:172] (0xc001cfb220) (3) Data frame sent I0218 17:03:16.053320 9 log.go:172] (0xc002c24b00) Data frame received for 1 I0218 17:03:16.053521 9 log.go:172] (0xc002c24b00) (0xc001cfb220) Stream removed, broadcasting: 3 I0218 17:03:16.053571 9 log.go:172] (0xc001cfb180) (1) Data frame handling I0218 17:03:16.053603 9 log.go:172] (0xc001cfb180) (1) Data frame sent I0218 17:03:16.053644 9 log.go:172] (0xc002c24b00) (0xc000d30e60) Stream removed, broadcasting: 5 I0218 17:03:16.053679 9 log.go:172] (0xc002c24b00) (0xc001cfb180) Stream removed, broadcasting: 1 I0218 17:03:16.053692 9 log.go:172] (0xc002c24b00) Go away received I0218 17:03:16.054117 9 log.go:172] (0xc002c24b00) (0xc001cfb180) Stream removed, broadcasting: 1 I0218 17:03:16.054144 9 log.go:172] (0xc002c24b00) (0xc001cfb220) Stream removed, broadcasting: 3 I0218 17:03:16.054150 9 log.go:172] (0xc002c24b00) (0xc000d30e60) Stream removed, broadcasting: 5 Feb 18 17:03:16.054: INFO: Exec stderr: "" Feb 18 17:03:16.054: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9686 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 17:03:16.054: INFO: >>> kubeConfig: /root/.kube/config I0218 17:03:16.102191 9 log.go:172] (0xc002c25130) (0xc001cfb680) Create stream I0218 17:03:16.102289 9 log.go:172] (0xc002c25130) (0xc001cfb680) Stream added, broadcasting: 1 I0218 17:03:16.106128 9 log.go:172] (0xc002c25130) Reply frame received for 1 I0218 17:03:16.106162 9 log.go:172] (0xc002c25130) (0xc000eb06e0) Create stream I0218 17:03:16.106171 9 log.go:172] (0xc002c25130) (0xc000eb06e0) Stream added, broadcasting: 3 I0218 17:03:16.108568 9 log.go:172] (0xc002c25130) Reply frame received for 3 I0218 17:03:16.108602 9 log.go:172] (0xc002c25130) (0xc001cfb7c0) Create stream I0218 17:03:16.108617 9 log.go:172] (0xc002c25130) (0xc001cfb7c0) Stream added, broadcasting: 5 I0218 17:03:16.109965 9 log.go:172] (0xc002c25130) Reply frame received for 5 I0218 17:03:16.221263 9 log.go:172] (0xc002c25130) Data frame received for 3 I0218 17:03:16.221365 9 log.go:172] (0xc000eb06e0) (3) Data frame handling I0218 17:03:16.221380 9 log.go:172] (0xc000eb06e0) (3) Data frame sent I0218 17:03:16.318273 9 log.go:172] (0xc002c25130) (0xc000eb06e0) Stream removed, broadcasting: 3 
I0218 17:03:16.318413 9 log.go:172] (0xc002c25130) Data frame received for 1 I0218 17:03:16.318438 9 log.go:172] (0xc001cfb680) (1) Data frame handling I0218 17:03:16.318472 9 log.go:172] (0xc001cfb680) (1) Data frame sent I0218 17:03:16.318486 9 log.go:172] (0xc002c25130) (0xc001cfb680) Stream removed, broadcasting: 1 I0218 17:03:16.318538 9 log.go:172] (0xc002c25130) (0xc001cfb7c0) Stream removed, broadcasting: 5 I0218 17:03:16.318636 9 log.go:172] (0xc002c25130) Go away received I0218 17:03:16.318803 9 log.go:172] (0xc002c25130) (0xc001cfb680) Stream removed, broadcasting: 1 I0218 17:03:16.318825 9 log.go:172] (0xc002c25130) (0xc000eb06e0) Stream removed, broadcasting: 3 I0218 17:03:16.318839 9 log.go:172] (0xc002c25130) (0xc001cfb7c0) Stream removed, broadcasting: 5 Feb 18 17:03:16.318: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 18 17:03:16.319: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9686 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 17:03:16.319: INFO: >>> kubeConfig: /root/.kube/config I0218 17:03:16.356648 9 log.go:172] (0xc001ed1a20) (0xc000d315e0) Create stream I0218 17:03:16.356773 9 log.go:172] (0xc001ed1a20) (0xc000d315e0) Stream added, broadcasting: 1 I0218 17:03:16.359161 9 log.go:172] (0xc001ed1a20) Reply frame received for 1 I0218 17:03:16.359204 9 log.go:172] (0xc001ed1a20) (0xc0029586e0) Create stream I0218 17:03:16.359219 9 log.go:172] (0xc001ed1a20) (0xc0029586e0) Stream added, broadcasting: 3 I0218 17:03:16.360955 9 log.go:172] (0xc001ed1a20) Reply frame received for 3 I0218 17:03:16.360973 9 log.go:172] (0xc001ed1a20) (0xc000d31680) Create stream I0218 17:03:16.360980 9 log.go:172] (0xc001ed1a20) (0xc000d31680) Stream added, broadcasting: 5 I0218 17:03:16.362129 9 log.go:172] (0xc001ed1a20) Reply frame received for 5 I0218 17:03:16.421603 9 log.go:172] (0xc001ed1a20) Data frame received for 3 I0218 17:03:16.421760 9 log.go:172] (0xc0029586e0) (3) Data frame handling I0218 17:03:16.421815 9 log.go:172] (0xc0029586e0) (3) Data frame sent I0218 17:03:16.507121 9 log.go:172] (0xc001ed1a20) (0xc0029586e0) Stream removed, broadcasting: 3 I0218 17:03:16.507333 9 log.go:172] (0xc001ed1a20) Data frame received for 1 I0218 17:03:16.507348 9 log.go:172] (0xc000d315e0) (1) Data frame handling I0218 17:03:16.507361 9 log.go:172] (0xc000d315e0) (1) Data frame sent I0218 17:03:16.507455 9 log.go:172] (0xc001ed1a20) (0xc000d31680) Stream removed, broadcasting: 5 I0218 17:03:16.507495 9 log.go:172] (0xc001ed1a20) (0xc000d315e0) Stream removed, broadcasting: 1 I0218 17:03:16.507524 9 log.go:172] (0xc001ed1a20) Go away received I0218 17:03:16.507674 9 log.go:172] (0xc001ed1a20) (0xc000d315e0) Stream removed, broadcasting: 1 I0218 17:03:16.507704 9 log.go:172] (0xc001ed1a20) (0xc0029586e0) Stream removed, broadcasting: 3 I0218 17:03:16.507728 9 log.go:172] (0xc001ed1a20) (0xc000d31680) Stream removed, broadcasting: 5 Feb 18 17:03:16.507: INFO: Exec stderr: "" Feb 18 17:03:16.507: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9686 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 17:03:16.507: INFO: >>> kubeConfig: /root/.kube/config I0218 17:03:16.544924 9 log.go:172] (0xc002ad66e0) (0xc000eb1400) Create stream I0218 17:03:16.545142 9 log.go:172] 
(0xc002ad66e0) (0xc000eb1400) Stream added, broadcasting: 1 I0218 17:03:16.548473 9 log.go:172] (0xc002ad66e0) Reply frame received for 1 I0218 17:03:16.548502 9 log.go:172] (0xc002ad66e0) (0xc002958820) Create stream I0218 17:03:16.548514 9 log.go:172] (0xc002ad66e0) (0xc002958820) Stream added, broadcasting: 3 I0218 17:03:16.549516 9 log.go:172] (0xc002ad66e0) Reply frame received for 3 I0218 17:03:16.549567 9 log.go:172] (0xc002ad66e0) (0xc001cfb900) Create stream I0218 17:03:16.549609 9 log.go:172] (0xc002ad66e0) (0xc001cfb900) Stream added, broadcasting: 5 I0218 17:03:16.551206 9 log.go:172] (0xc002ad66e0) Reply frame received for 5 I0218 17:03:16.657902 9 log.go:172] (0xc002ad66e0) Data frame received for 3 I0218 17:03:16.658128 9 log.go:172] (0xc002958820) (3) Data frame handling I0218 17:03:16.658167 9 log.go:172] (0xc002958820) (3) Data frame sent I0218 17:03:16.744181 9 log.go:172] (0xc002ad66e0) Data frame received for 1 I0218 17:03:16.744411 9 log.go:172] (0xc002ad66e0) (0xc002958820) Stream removed, broadcasting: 3 I0218 17:03:16.744496 9 log.go:172] (0xc000eb1400) (1) Data frame handling I0218 17:03:16.744531 9 log.go:172] (0xc000eb1400) (1) Data frame sent I0218 17:03:16.744546 9 log.go:172] (0xc002ad66e0) (0xc000eb1400) Stream removed, broadcasting: 1 I0218 17:03:16.744992 9 log.go:172] (0xc002ad66e0) (0xc001cfb900) Stream removed, broadcasting: 5 I0218 17:03:16.745050 9 log.go:172] (0xc002ad66e0) Go away received I0218 17:03:16.745115 9 log.go:172] (0xc002ad66e0) (0xc000eb1400) Stream removed, broadcasting: 1 I0218 17:03:16.745128 9 log.go:172] (0xc002ad66e0) (0xc002958820) Stream removed, broadcasting: 3 I0218 17:03:16.745148 9 log.go:172] (0xc002ad66e0) (0xc001cfb900) Stream removed, broadcasting: 5 Feb 18 17:03:16.745: INFO: Exec stderr: "" Feb 18 17:03:16.745: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9686 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 17:03:16.746: INFO: >>> kubeConfig: /root/.kube/config I0218 17:03:16.795234 9 log.go:172] (0xc002110160) (0xc000d31860) Create stream I0218 17:03:16.795409 9 log.go:172] (0xc002110160) (0xc000d31860) Stream added, broadcasting: 1 I0218 17:03:16.799115 9 log.go:172] (0xc002110160) Reply frame received for 1 I0218 17:03:16.799153 9 log.go:172] (0xc002110160) (0xc001cfba40) Create stream I0218 17:03:16.799166 9 log.go:172] (0xc002110160) (0xc001cfba40) Stream added, broadcasting: 3 I0218 17:03:16.800633 9 log.go:172] (0xc002110160) Reply frame received for 3 I0218 17:03:16.800774 9 log.go:172] (0xc002110160) (0xc001cfbae0) Create stream I0218 17:03:16.800793 9 log.go:172] (0xc002110160) (0xc001cfbae0) Stream added, broadcasting: 5 I0218 17:03:16.802942 9 log.go:172] (0xc002110160) Reply frame received for 5 I0218 17:03:16.880928 9 log.go:172] (0xc002110160) Data frame received for 3 I0218 17:03:16.881007 9 log.go:172] (0xc001cfba40) (3) Data frame handling I0218 17:03:16.881037 9 log.go:172] (0xc001cfba40) (3) Data frame sent I0218 17:03:16.949385 9 log.go:172] (0xc002110160) (0xc001cfba40) Stream removed, broadcasting: 3 I0218 17:03:16.949586 9 log.go:172] (0xc002110160) Data frame received for 1 I0218 17:03:16.949634 9 log.go:172] (0xc002110160) (0xc001cfbae0) Stream removed, broadcasting: 5 I0218 17:03:16.949652 9 log.go:172] (0xc000d31860) (1) Data frame handling I0218 17:03:16.949667 9 log.go:172] (0xc000d31860) (1) Data frame sent I0218 17:03:16.949676 9 log.go:172] (0xc002110160) (0xc000d31860) 
Stream removed, broadcasting: 1 I0218 17:03:16.949692 9 log.go:172] (0xc002110160) Go away received I0218 17:03:16.949924 9 log.go:172] (0xc002110160) (0xc000d31860) Stream removed, broadcasting: 1 I0218 17:03:16.949954 9 log.go:172] (0xc002110160) (0xc001cfba40) Stream removed, broadcasting: 3 I0218 17:03:16.949968 9 log.go:172] (0xc002110160) (0xc001cfbae0) Stream removed, broadcasting: 5 Feb 18 17:03:16.949: INFO: Exec stderr: "" Feb 18 17:03:16.950: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9686 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 17:03:16.950: INFO: >>> kubeConfig: /root/.kube/config I0218 17:03:16.989883 9 log.go:172] (0xc0018d6790) (0xc002958d20) Create stream I0218 17:03:16.990036 9 log.go:172] (0xc0018d6790) (0xc002958d20) Stream added, broadcasting: 1 I0218 17:03:16.992900 9 log.go:172] (0xc0018d6790) Reply frame received for 1 I0218 17:03:16.992941 9 log.go:172] (0xc0018d6790) (0xc0028eed20) Create stream I0218 17:03:16.992948 9 log.go:172] (0xc0018d6790) (0xc0028eed20) Stream added, broadcasting: 3 I0218 17:03:16.994446 9 log.go:172] (0xc0018d6790) Reply frame received for 3 I0218 17:03:16.994504 9 log.go:172] (0xc0018d6790) (0xc001cfbb80) Create stream I0218 17:03:16.994517 9 log.go:172] (0xc0018d6790) (0xc001cfbb80) Stream added, broadcasting: 5 I0218 17:03:16.999284 9 log.go:172] (0xc0018d6790) Reply frame received for 5 I0218 17:03:17.068673 9 log.go:172] (0xc0018d6790) Data frame received for 3 I0218 17:03:17.068840 9 log.go:172] (0xc0028eed20) (3) Data frame handling I0218 17:03:17.068898 9 log.go:172] (0xc0028eed20) (3) Data frame sent I0218 17:03:17.159040 9 log.go:172] (0xc0018d6790) Data frame received for 1 I0218 17:03:17.159268 9 log.go:172] (0xc0018d6790) (0xc0028eed20) Stream removed, broadcasting: 3 I0218 17:03:17.159315 9 log.go:172] (0xc002958d20) (1) Data frame handling I0218 17:03:17.159362 9 log.go:172] (0xc002958d20) (1) Data frame sent I0218 17:03:17.159381 9 log.go:172] (0xc0018d6790) (0xc001cfbb80) Stream removed, broadcasting: 5 I0218 17:03:17.159409 9 log.go:172] (0xc0018d6790) (0xc002958d20) Stream removed, broadcasting: 1 I0218 17:03:17.159440 9 log.go:172] (0xc0018d6790) Go away received I0218 17:03:17.159656 9 log.go:172] (0xc0018d6790) (0xc002958d20) Stream removed, broadcasting: 1 I0218 17:03:17.159682 9 log.go:172] (0xc0018d6790) (0xc0028eed20) Stream removed, broadcasting: 3 I0218 17:03:17.159693 9 log.go:172] (0xc0018d6790) (0xc001cfbb80) Stream removed, broadcasting: 5 Feb 18 17:03:17.159: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:03:17.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-9686" for this suite. 
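The verbose "Create stream / Stream added, broadcasting" lines above are the SPDY stream multiplexing behind the framework's ExecWithOptions. A minimal client-go sketch of the same operation, running cat /etc/hosts against a container over the pods/exec subresource (namespace, pod, and container names copied from the log; recent client-go assumed):

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build the exec subresource request, the equivalent of the
	// ExecWithOptions calls logged above.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-9686").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	// Stream multiplexes stdout/stderr over SPDY; the "broadcasting"
	// log lines above are this machinery at work.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}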
• [SLOW TEST:20.564 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":157,"skipped":2402,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:03:17.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-d05caa73-9c01-4ec2-902c-2153047952fa STEP: Creating a pod to test consume secrets Feb 18 17:03:17.328: INFO: Waiting up to 5m0s for pod "pod-secrets-7be3852a-a21d-4722-8a95-1d01bec4682c" in namespace "secrets-2030" to be "success or failure" Feb 18 17:03:17.393: INFO: Pod "pod-secrets-7be3852a-a21d-4722-8a95-1d01bec4682c": Phase="Pending", Reason="", readiness=false. Elapsed: 64.749776ms Feb 18 17:03:19.480: INFO: Pod "pod-secrets-7be3852a-a21d-4722-8a95-1d01bec4682c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151767114s Feb 18 17:03:21.486: INFO: Pod "pod-secrets-7be3852a-a21d-4722-8a95-1d01bec4682c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157724202s Feb 18 17:03:23.674: INFO: Pod "pod-secrets-7be3852a-a21d-4722-8a95-1d01bec4682c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.346234487s Feb 18 17:03:25.737: INFO: Pod "pod-secrets-7be3852a-a21d-4722-8a95-1d01bec4682c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.409328231s Feb 18 17:03:27.748: INFO: Pod "pod-secrets-7be3852a-a21d-4722-8a95-1d01bec4682c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.419939136s STEP: Saw pod success Feb 18 17:03:27.748: INFO: Pod "pod-secrets-7be3852a-a21d-4722-8a95-1d01bec4682c" satisfied condition "success or failure" Feb 18 17:03:27.752: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-secrets-7be3852a-a21d-4722-8a95-1d01bec4682c container secret-volume-test: STEP: delete the pod Feb 18 17:03:28.372: INFO: Waiting for pod pod-secrets-7be3852a-a21d-4722-8a95-1d01bec4682c to disappear Feb 18 17:03:28.420: INFO: Pod pod-secrets-7be3852a-a21d-4722-8a95-1d01bec4682c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:03:28.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2030" for this suite. 
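The secrets test above consumes one secret through two volumes in the same pod. A sketch of that shape, assuming illustrative names, image, and mount paths (the real test generates its own):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretPod mounts the same secret at two paths, the pattern the
// "consumable in multiple volumes" test verifies.
func secretPod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{
					Name: "secret-volume-1",
					VolumeSource: corev1.VolumeSource{
						Secret: &corev1.SecretVolumeSource{SecretName: secretName},
					},
				},
				{
					Name: "secret-volume-2",
					VolumeSource: corev1.VolumeSource{
						Secret: &corev1.SecretVolumeSource{SecretName: secretName},
					},
				},
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
}

func main() { _ = secretPod("secret-test-demo") }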
• [SLOW TEST:11.288 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":158,"skipped":2411,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:03:28.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: getting the auto-created API token STEP: reading a file in the container Feb 18 17:03:37.240: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9884 pod-service-account-b3ed3072-4235-4d10-a14c-2db712460952 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Feb 18 17:03:39.434: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9884 pod-service-account-b3ed3072-4235-4d10-a14c-2db712460952 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Feb 18 17:03:39.775: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9884 pod-service-account-b3ed3072-4235-4d10-a14c-2db712460952 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:03:40.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9884" for this suite. 
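The three files the service-account test reads via kubectl exec live at a fixed projection path inside every pod with a mounted token; they are the same files rest.InClusterConfig consumes. A small sketch of reading them from inside a pod (Go 1.16+ for os.ReadFile; older code would use ioutil.ReadFile):

package main

import (
	"fmt"
	"os"
)

// The kubelet projects service-account credentials here; the test above
// cats token, ca.crt, and namespace from this directory.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		data, err := os.ReadFile(saDir + "/" + f)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d bytes\n", f, len(data))
	}
}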
• [SLOW TEST:11.691 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":280,"completed":159,"skipped":2411,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:03:40.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 18 17:03:40.979: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 18 17:03:43.001: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642221, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642221, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642221, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642220, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 17:03:45.009: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642221, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642221, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642221, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642220, 
loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 17:03:47.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642221, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642221, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642221, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642220, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 18 17:03:50.549: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 17:03:50.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-322-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:03:52.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4027" for this suite. STEP: Destroying namespace "webhook-4027-markers" for this suite. 
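[Annotation] The storage-version dance above (create a CR while v1 is the storage version, flip the CRD so v2 is storage, then patch the CR) can be sanity-checked by hand with standard kubectl jsonpath queries; the CRD name below is a placeholder, not the generated e2e-test name from this run:
  kubectl get crd <crd-name> -o jsonpath='{.spec.versions[?(@.storage==true)].name}'
  kubectl get crd <crd-name> -o jsonpath='{.status.storedVersions}'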
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:12.764 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":280,"completed":160,"skipped":2441,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:03:52.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Feb 18 17:03:53.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4382' Feb 18 17:03:53.585: INFO: stderr: "" Feb 18 17:03:53.586: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Feb 18 17:03:54.898: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:03:54.898: INFO: Found 0 / 1 Feb 18 17:03:55.600: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:03:55.601: INFO: Found 0 / 1 Feb 18 17:03:56.669: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:03:56.669: INFO: Found 0 / 1 Feb 18 17:03:57.609: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:03:57.609: INFO: Found 0 / 1 Feb 18 17:03:58.591: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:03:58.591: INFO: Found 0 / 1 Feb 18 17:04:00.608: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:04:00.609: INFO: Found 0 / 1 Feb 18 17:04:01.870: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:04:01.871: INFO: Found 0 / 1 Feb 18 17:04:02.596: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:04:02.596: INFO: Found 0 / 1 Feb 18 17:04:03.596: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:04:03.596: INFO: Found 1 / 1 Feb 18 17:04:03.596: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 18 17:04:03.601: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:04:03.601: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 18 17:04:03.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-js7w2 --namespace=kubectl-4382 -p {"metadata":{"annotations":{"x":"y"}}}' Feb 18 17:04:03.759: INFO: stderr: "" Feb 18 17:04:03.759: INFO: stdout: "pod/agnhost-master-js7w2 patched\n" STEP: checking annotations Feb 18 17:04:03.791: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:04:03.791: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:04:03.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4382" for this suite. • [SLOW TEST:10.891 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":280,"completed":161,"skipped":2447,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:04:03.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Feb 18 17:04:03.982: INFO: PodSpec: initContainers in spec.initContainers Feb 18 17:04:56.704: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0db6e53b-d45b-4342-951b-ea7545a8cfe4", GenerateName:"", Namespace:"init-container-1941", SelfLink:"/api/v1/namespaces/init-container-1941/pods/pod-init-0db6e53b-d45b-4342-951b-ea7545a8cfe4", UID:"ae08ba84-41fe-4faa-b564-e8a0a94d37ff", ResourceVersion:"9218614", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717642243, loc:(*time.Location)(0x7e52ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"982751185"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6qvv5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), 
EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004cf8000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6qvv5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6qvv5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6qvv5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000534a48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004818000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000534d20)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000534d80)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000534d88), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000534d8c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642244, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642244, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642244, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642243, loc:(*time.Location)(0x7e52ca0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, 
StartTime:(*v1.Time)(0xc003732040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0024ee070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0024ee150)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://ec10e799e5f27dffaa7267130a00e106688455ede4e69c8087ff5577b4404830", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003732080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003732060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc000535a4f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:04:56.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1941" for this suite. 
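[Annotation] The pod dump above is long, but the shape is simple: init1 runs /bin/false and keeps restarting (RestartCount:3), init2 never gets its turn, and the app container run1 stays Waiting, so the pod is pinned in Pending. A minimal sketch for polling the same state by hand, with placeholder names:
  kubectl get pod <pod-name> --namespace=<namespace> -o jsonpath='{.status.phase}'
  kubectl get pod <pod-name> --namespace=<namespace> -o jsonpath='{.status.initContainerStatuses[0].restartCount}'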
• [SLOW TEST:52.965 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":280,"completed":162,"skipped":2447,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:04:56.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 18 17:04:56.914: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4731 /api/v1/namespaces/watch-4731/configmaps/e2e-watch-test-label-changed e4240942-9752-4c02-9970-4ed8af91103d 9218622 0 2020-02-18 17:04:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 18 17:04:56.914: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4731 /api/v1/namespaces/watch-4731/configmaps/e2e-watch-test-label-changed e4240942-9752-4c02-9970-4ed8af91103d 9218623 0 2020-02-18 17:04:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 18 17:04:56.914: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4731 /api/v1/namespaces/watch-4731/configmaps/e2e-watch-test-label-changed e4240942-9752-4c02-9970-4ed8af91103d 9218624 0 2020-02-18 17:04:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 18 17:05:07.006: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4731 /api/v1/namespaces/watch-4731/configmaps/e2e-watch-test-label-changed e4240942-9752-4c02-9970-4ed8af91103d 9218661 0 2020-02-18 17:04:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 18 17:05:07.006: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4731 /api/v1/namespaces/watch-4731/configmaps/e2e-watch-test-label-changed e4240942-9752-4c02-9970-4ed8af91103d 9218662 0 2020-02-18 17:04:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 18 17:05:07.007: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4731 /api/v1/namespaces/watch-4731/configmaps/e2e-watch-test-label-changed e4240942-9752-4c02-9970-4ed8af91103d 9218663 0 2020-02-18 17:04:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:05:07.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4731" for this suite. • [SLOW TEST:10.247 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":280,"completed":163,"skipped":2561,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:05:07.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Feb 18 17:05:07.210: INFO: Waiting up to 5m0s for pod "downward-api-d6da34f8-23f1-47d9-b552-b70bb7e3d305" in namespace "downward-api-1824" to be "success or failure" Feb 18 17:05:07.351: INFO: Pod "downward-api-d6da34f8-23f1-47d9-b552-b70bb7e3d305": Phase="Pending", Reason="", readiness=false. Elapsed: 141.591425ms Feb 18 17:05:09.359: INFO: Pod "downward-api-d6da34f8-23f1-47d9-b552-b70bb7e3d305": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.149011279s Feb 18 17:05:11.432: INFO: Pod "downward-api-d6da34f8-23f1-47d9-b552-b70bb7e3d305": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222123122s Feb 18 17:05:13.443: INFO: Pod "downward-api-d6da34f8-23f1-47d9-b552-b70bb7e3d305": Phase="Pending", Reason="", readiness=false. Elapsed: 6.23336404s Feb 18 17:05:15.452: INFO: Pod "downward-api-d6da34f8-23f1-47d9-b552-b70bb7e3d305": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.241623201s STEP: Saw pod success Feb 18 17:05:15.452: INFO: Pod "downward-api-d6da34f8-23f1-47d9-b552-b70bb7e3d305" satisfied condition "success or failure" Feb 18 17:05:15.456: INFO: Trying to get logs from node jerma-node pod downward-api-d6da34f8-23f1-47d9-b552-b70bb7e3d305 container dapi-container: STEP: delete the pod Feb 18 17:05:15.699: INFO: Waiting for pod downward-api-d6da34f8-23f1-47d9-b552-b70bb7e3d305 to disappear Feb 18 17:05:15.707: INFO: Pod downward-api-d6da34f8-23f1-47d9-b552-b70bb7e3d305 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:05:15.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1824" for this suite. • [SLOW TEST:8.695 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":280,"completed":164,"skipped":2593,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:05:15.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Feb 18 17:05:15.832: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 18 17:05:15.854: INFO: Waiting for terminating namespaces to be deleted... 
Feb 18 17:05:15.858: INFO: Logging pods the kubelet thinks are on node jerma-node before test Feb 18 17:05:15.868: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Feb 18 17:05:15.868: INFO: Container kube-proxy ready: true, restart count 0 Feb 18 17:05:15.868: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 18 17:05:15.868: INFO: Container weave ready: true, restart count 1 Feb 18 17:05:15.868: INFO: Container weave-npc ready: true, restart count 0 Feb 18 17:05:15.868: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Feb 18 17:05:15.933: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 18 17:05:15.933: INFO: Container kube-scheduler ready: true, restart count 16 Feb 18 17:05:15.933: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 18 17:05:15.933: INFO: Container kube-apiserver ready: true, restart count 1 Feb 18 17:05:15.933: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 18 17:05:15.933: INFO: Container etcd ready: true, restart count 1 Feb 18 17:05:15.933: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 18 17:05:15.933: INFO: Container coredns ready: true, restart count 0 Feb 18 17:05:15.933: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 18 17:05:15.933: INFO: Container coredns ready: true, restart count 0 Feb 18 17:05:15.933: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 18 17:05:15.933: INFO: Container kube-controller-manager ready: true, restart count 12 Feb 18 17:05:15.933: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Feb 18 17:05:15.933: INFO: Container kube-proxy ready: true, restart count 0 Feb 18 17:05:15.933: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 18 17:05:15.933: INFO: Container weave ready: true, restart count 0 Feb 18 17:05:15.933: INFO: Container weave-npc ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: verifying the node has the label node jerma-node STEP: verifying the node has the label node jerma-server-mvvl6gufaqub Feb 18 17:05:16.041: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 18 17:05:16.042: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 18 17:05:16.042: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Feb 18 17:05:16.042: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub Feb 18 17:05:16.042: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub Feb 18 17:05:16.042: INFO: Pod
kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Feb 18 17:05:16.042: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node Feb 18 17:05:16.042: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 18 17:05:16.042: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node Feb 18 17:05:16.042: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub STEP: Starting Pods to consume most of the cluster CPU. Feb 18 17:05:16.042: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node Feb 18 17:05:16.046: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-4715e17d-c4aa-49a4-99a0-7fbf2bfe678f.15f48e03cb367580], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1382/filler-pod-4715e17d-c4aa-49a4-99a0-7fbf2bfe678f to jerma-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-4715e17d-c4aa-49a4-99a0-7fbf2bfe678f.15f48e04aa2b2bb8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-4715e17d-c4aa-49a4-99a0-7fbf2bfe678f.15f48e05bbca3518], Reason = [Created], Message = [Created container filler-pod-4715e17d-c4aa-49a4-99a0-7fbf2bfe678f] STEP: Considering event: Type = [Normal], Name = [filler-pod-4715e17d-c4aa-49a4-99a0-7fbf2bfe678f.15f48e05e4f502ae], Reason = [Started], Message = [Started container filler-pod-4715e17d-c4aa-49a4-99a0-7fbf2bfe678f] STEP: Considering event: Type = [Normal], Name = [filler-pod-b67cd5bb-3499-4cd2-951b-79b06d86dfd3.15f48e03cbbf62fb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1382/filler-pod-b67cd5bb-3499-4cd2-951b-79b06d86dfd3 to jerma-server-mvvl6gufaqub] STEP: Considering event: Type = [Normal], Name = [filler-pod-b67cd5bb-3499-4cd2-951b-79b06d86dfd3.15f48e04e9928d97], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-b67cd5bb-3499-4cd2-951b-79b06d86dfd3.15f48e05df041cf4], Reason = [Created], Message = [Created container filler-pod-b67cd5bb-3499-4cd2-951b-79b06d86dfd3] STEP: Considering event: Type = [Normal], Name = [filler-pod-b67cd5bb-3499-4cd2-951b-79b06d86dfd3.15f48e06049c9e66], Reason = [Started], Message = [Started container filler-pod-b67cd5bb-3499-4cd2-951b-79b06d86dfd3] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f48e06985a4e34], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node jerma-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-server-mvvl6gufaqub STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:05:29.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1382" for this suite. 
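[Annotation] The filler-pod sizes above (cpu=2786m and cpu=2261m) appear to come from subtracting each node's existing CPU requests from its allocatable CPU, so the additional pod necessarily fails with "0/2 nodes are available: 2 Insufficient cpu." Both inputs to that arithmetic can be inspected by hand; the node name is a placeholder:
  kubectl get node <node-name> -o jsonpath='{.status.allocatable.cpu}'
  kubectl describe node <node-name> | grep -A 8 'Allocated resources'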
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:13.563 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":280,"completed":165,"skipped":2593,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:05:29.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:05:39.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-363" for this suite. 
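[Annotation] The hostAliases assertion in the next test is that entries from spec.hostAliases are rendered by the kubelet into the container's /etc/hosts. A hand-run spot check, with placeholder names:
  kubectl get pod <pod-name> --namespace=<namespace> -o jsonpath='{.spec.hostAliases}'
  kubectl exec --namespace=<namespace> <pod-name> -- cat /etc/hosts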
• [SLOW TEST:10.421 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":166,"skipped":2600,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:05:39.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Feb 18 17:05:39.781: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Feb 18 17:05:53.242: INFO: >>> kubeConfig: /root/.kube/config Feb 18 17:05:56.186: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:06:08.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6488" for this suite. 
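[Annotation] What "show up in OpenAPI documentation" means concretely in the CRD test above is that each CRD's group/version/kind appears in the aggregated spec the apiserver serves. That spec can be pulled directly; the group name below is a placeholder patterned on the test's example.com groups:
  kubectl get --raw /openapi/v2 | grep <group-name>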
• [SLOW TEST:29.275 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":280,"completed":167,"skipped":2603,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:06:08.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod busybox-29ced2e3-8670-4cf0-922d-6672d2b502fb in namespace container-probe-5280 Feb 18 17:06:17.928: INFO: Started pod busybox-29ced2e3-8670-4cf0-922d-6672d2b502fb in namespace container-probe-5280 STEP: checking the pod's current state and verifying that restartCount is present Feb 18 17:06:17.940: INFO: Initial restart count of pod busybox-29ced2e3-8670-4cf0-922d-6672d2b502fb is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:10:19.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5280" for this suite. 
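[Annotation] The roughly four minutes between pod start (17:06) and teardown (17:10) in the probe test above is the observation window: with an exec liveness probe running cat /tmp/health against a file the container keeps in place, restartCount must stay at its initial 0. The counter itself is one jsonpath away (placeholder names):
  kubectl get pod <pod-name> --namespace=<namespace> -o jsonpath='{.status.containerStatuses[0].restartCount}'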
• [SLOW TEST:251.021 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":168,"skipped":2640,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:10:20.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 17:10:20.070: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 18 17:10:23.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8737 create -f -' Feb 18 17:10:26.809: INFO: stderr: "" Feb 18 17:10:26.810: INFO: stdout: "e2e-test-crd-publish-openapi-9079-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 18 17:10:26.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8737 delete e2e-test-crd-publish-openapi-9079-crds test-cr' Feb 18 17:10:26.975: INFO: stderr: "" Feb 18 17:10:26.975: INFO: stdout: "e2e-test-crd-publish-openapi-9079-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Feb 18 17:10:26.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8737 apply -f -' Feb 18 17:10:27.432: INFO: stderr: "" Feb 18 17:10:27.432: INFO: stdout: "e2e-test-crd-publish-openapi-9079-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 18 17:10:27.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8737 delete e2e-test-crd-publish-openapi-9079-crds test-cr' Feb 18 17:10:27.576: INFO: stderr: "" Feb 18 17:10:27.576: INFO: stdout: "e2e-test-crd-publish-openapi-9079-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 18 17:10:27.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9079-crds' Feb 18 17:10:27.847: INFO: stderr: "" Feb 18 17:10:27.847: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9079-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field 
for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:10:31.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8737" for this suite. • [SLOW TEST:11.102 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":280,"completed":169,"skipped":2640,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:10:31.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods changes Feb 18 17:10:40.298: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:10:40.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7385" for this suite.
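[Annotation] Adoption and release in the ReplicaSet test above are both driven by labels: the orphan pod matches the ReplicaSet selector and gains an ownerReference, and relabeling it out of the selector clears that ownerReference again. A hand-run version of the two checks, with placeholder names and label value:
  kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].name}'
  kubectl label pod <pod-name> name=<non-matching-value> --overwrite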
• [SLOW TEST:9.351 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":280,"completed":170,"skipped":2663,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:10:40.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 18 17:10:40.656: INFO: Waiting up to 5m0s for pod "pod-db3c3e31-dd34-4668-a9cd-affa4f1ba7c9" in namespace "emptydir-4972" to be "success or failure" Feb 18 17:10:40.672: INFO: Pod "pod-db3c3e31-dd34-4668-a9cd-affa4f1ba7c9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.57052ms Feb 18 17:10:42.680: INFO: Pod "pod-db3c3e31-dd34-4668-a9cd-affa4f1ba7c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024306682s Feb 18 17:10:44.684: INFO: Pod "pod-db3c3e31-dd34-4668-a9cd-affa4f1ba7c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0287179s Feb 18 17:10:46.721: INFO: Pod "pod-db3c3e31-dd34-4668-a9cd-affa4f1ba7c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065222168s Feb 18 17:10:48.729: INFO: Pod "pod-db3c3e31-dd34-4668-a9cd-affa4f1ba7c9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07366019s Feb 18 17:10:50.741: INFO: Pod "pod-db3c3e31-dd34-4668-a9cd-affa4f1ba7c9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.085319254s Feb 18 17:10:52.750: INFO: Pod "pod-db3c3e31-dd34-4668-a9cd-affa4f1ba7c9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.094730951s Feb 18 17:10:54.997: INFO: Pod "pod-db3c3e31-dd34-4668-a9cd-affa4f1ba7c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.341692878s STEP: Saw pod success Feb 18 17:10:54.998: INFO: Pod "pod-db3c3e31-dd34-4668-a9cd-affa4f1ba7c9" satisfied condition "success or failure" Feb 18 17:10:55.013: INFO: Trying to get logs from node jerma-node pod pod-db3c3e31-dd34-4668-a9cd-affa4f1ba7c9 container test-container: STEP: delete the pod Feb 18 17:10:55.140: INFO: Waiting for pod pod-db3c3e31-dd34-4668-a9cd-affa4f1ba7c9 to disappear Feb 18 17:10:55.164: INFO: Pod pod-db3c3e31-dd34-4668-a9cd-affa4f1ba7c9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:10:55.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4972" for this suite. • [SLOW TEST:14.730 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":171,"skipped":2714,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:10:55.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-72f3950f-b0d8-4aed-8d6f-cabc04f8a578 STEP: Creating a pod to test consume secrets Feb 18 17:10:57.169: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2134473a-fa34-47a4-95f5-63222437bfbf" in namespace "projected-6564" to be "success or failure" Feb 18 17:10:57.231: INFO: Pod "pod-projected-secrets-2134473a-fa34-47a4-95f5-63222437bfbf": Phase="Pending", Reason="", readiness=false. Elapsed: 62.310916ms Feb 18 17:10:59.238: INFO: Pod "pod-projected-secrets-2134473a-fa34-47a4-95f5-63222437bfbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06900949s Feb 18 17:11:01.248: INFO: Pod "pod-projected-secrets-2134473a-fa34-47a4-95f5-63222437bfbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079449221s Feb 18 17:11:03.265: INFO: Pod "pod-projected-secrets-2134473a-fa34-47a4-95f5-63222437bfbf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09607217s Feb 18 17:11:05.275: INFO: Pod "pod-projected-secrets-2134473a-fa34-47a4-95f5-63222437bfbf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.105533129s STEP: Saw pod success Feb 18 17:11:05.275: INFO: Pod "pod-projected-secrets-2134473a-fa34-47a4-95f5-63222437bfbf" satisfied condition "success or failure" Feb 18 17:11:05.279: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-2134473a-fa34-47a4-95f5-63222437bfbf container projected-secret-volume-test: STEP: delete the pod Feb 18 17:11:05.568: INFO: Waiting for pod pod-projected-secrets-2134473a-fa34-47a4-95f5-63222437bfbf to disappear Feb 18 17:11:05.573: INFO: Pod pod-projected-secrets-2134473a-fa34-47a4-95f5-63222437bfbf no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:11:05.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6564" for this suite. • [SLOW TEST:10.394 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":172,"skipped":2730,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:11:05.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 18 17:11:05.728: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6072 /api/v1/namespaces/watch-6072/configmaps/e2e-watch-test-watch-closed df1f52ab-85d0-431e-ad87-9d88ba33fe85 9219749 0 2020-02-18 17:11:05 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 18 17:11:05.728: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6072 /api/v1/namespaces/watch-6072/configmaps/e2e-watch-test-watch-closed df1f52ab-85d0-431e-ad87-9d88ba33fe85 9219750 0 2020-02-18 17:11:05 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: 
Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 18 17:11:05.767: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6072 /api/v1/namespaces/watch-6072/configmaps/e2e-watch-test-watch-closed df1f52ab-85d0-431e-ad87-9d88ba33fe85 9219751 0 2020-02-18 17:11:05 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 18 17:11:05.768: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6072 /api/v1/namespaces/watch-6072/configmaps/e2e-watch-test-watch-closed df1f52ab-85d0-431e-ad87-9d88ba33fe85 9219752 0 2020-02-18 17:11:05 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:11:05.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6072" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":280,"completed":173,"skipped":2740,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
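
Note on what this spec establishes: a watch can be resumed from the last resourceVersion delivered by an earlier watch, so events that occurred while no watch was open (here, the second MODIFIED and the DELETED) are replayed rather than lost. A minimal sketch of the same pattern with client-go, using the v0.18-era context-taking signatures; the namespace is a placeholder, and the assumption that the first event carries a ConfigMap is illustrative.

package main

import (
    "context"
    "fmt"
    "log"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        log.Fatal(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }
    cms := cs.CoreV1().ConfigMaps("default") // placeholder namespace

    // First watch: take one event, remember its resourceVersion, then close.
    w1, err := cms.Watch(context.TODO(), metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    ev := <-w1.ResultChan()
    cm := ev.Object.(*corev1.ConfigMap) // sketch assumes a ConfigMap event
    last := cm.ResourceVersion
    w1.Stop()

    // Second watch resumes from that version: changes made while no watch
    // was open are delivered first, before any new events.
    w2, err := cms.Watch(context.TODO(), metav1.ListOptions{ResourceVersion: last})
    if err != nil {
        log.Fatal(err)
    }
    for e := range w2.ResultChan() {
        fmt.Println("Got:", e.Type)
    }
}

Modifying and then deleting the ConfigMap between w1.Stop() and the second Watch call reproduces the MODIFIED/DELETED replay shown in the log above.
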
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:11:05.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-secret-m86l
STEP: Creating a pod to test atomic-volume-subpath
Feb 18 17:11:06.090: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-m86l" in namespace "subpath-4845" to be "success or failure"
Feb 18 17:11:06.094: INFO: Pod "pod-subpath-test-secret-m86l": Phase="Pending", Reason="", readiness=false. Elapsed: 3.880762ms
Feb 18 17:11:08.102: INFO: Pod "pod-subpath-test-secret-m86l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01212013s
Feb 18 17:11:10.110: INFO: Pod "pod-subpath-test-secret-m86l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019270231s
Feb 18 17:11:12.118: INFO: Pod "pod-subpath-test-secret-m86l": Phase="Running", Reason="", readiness=true. Elapsed: 6.027965998s
Feb 18 17:11:14.124: INFO: Pod "pod-subpath-test-secret-m86l": Phase="Running", Reason="", readiness=true. Elapsed: 8.033530236s
Feb 18 17:11:16.134: INFO: Pod "pod-subpath-test-secret-m86l": Phase="Running", Reason="", readiness=true. Elapsed: 10.043219285s
Feb 18 17:11:18.169: INFO: Pod "pod-subpath-test-secret-m86l": Phase="Running", Reason="", readiness=true. Elapsed: 12.078675814s
Feb 18 17:11:20.176: INFO: Pod "pod-subpath-test-secret-m86l": Phase="Running", Reason="", readiness=true. Elapsed: 14.085655786s
Feb 18 17:11:22.184: INFO: Pod "pod-subpath-test-secret-m86l": Phase="Running", Reason="", readiness=true. Elapsed: 16.093682243s
Feb 18 17:11:24.189: INFO: Pod "pod-subpath-test-secret-m86l": Phase="Running", Reason="", readiness=true. Elapsed: 18.099013812s
Feb 18 17:11:26.198: INFO: Pod "pod-subpath-test-secret-m86l": Phase="Running", Reason="", readiness=true. Elapsed: 20.107626412s
Feb 18 17:11:28.208: INFO: Pod "pod-subpath-test-secret-m86l": Phase="Running", Reason="", readiness=true. Elapsed: 22.118004731s
Feb 18 17:11:30.216: INFO: Pod "pod-subpath-test-secret-m86l": Phase="Running", Reason="", readiness=true. Elapsed: 24.125678101s
Feb 18 17:11:32.232: INFO: Pod "pod-subpath-test-secret-m86l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.142151658s
STEP: Saw pod success
Feb 18 17:11:32.233: INFO: Pod "pod-subpath-test-secret-m86l" satisfied condition "success or failure"
Feb 18 17:11:32.239: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-m86l container test-container-subpath-secret-m86l:
STEP: delete the pod
Feb 18 17:11:32.282: INFO: Waiting for pod pod-subpath-test-secret-m86l to disappear
Feb 18 17:11:32.293: INFO: Pod pod-subpath-test-secret-m86l no longer exists
STEP: Deleting pod pod-subpath-test-secret-m86l
Feb 18 17:11:32.294: INFO: Deleting pod "pod-subpath-test-secret-m86l" in namespace "subpath-4845"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:11:32.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4845" for this suite.
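
For readers decoding the fixture above: an atomic-writer subPath pod mounts a single key of a secret at a file path instead of the whole volume. A rough sketch of such a pod in the core/v1 Go types; the secret name, key, image, and paths are placeholders, not the objects this suite actually generates.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}, // placeholder
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "busybox", // placeholder image
                Command: []string{"cat", "/probe/data"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "secret-volume",
                    MountPath: "/probe/data",
                    // subPath mounts only this key's file rather than the whole
                    // secret directory; note subPath mounts do not see later
                    // updates to the secret.
                    SubPath: "data-1",
                }},
            }},
        },
    }
    fmt.Println(pod.Name)
}
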
• [SLOW TEST:26.498 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":280,"completed":174,"skipped":2756,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:11:32.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap that has name configmap-test-emptyKey-5ac7322b-0c04-4cb9-b0d1-ff0bd2bf478f
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:11:32.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3881" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":280,"completed":175,"skipped":2765,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] Aggregator
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:11:32.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75
Feb 18 17:11:32.571: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the sample API server.
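
Mechanically, "Registering the sample API server" comes down to creating an APIService object that tells the kube-aggregator to proxy one group/version to a Service in front of the sample server, alongside certs, RBAC, and the Deployment whose rollout is polled below. A sketch with the kube-aggregator v1 Go types, assuming the conventional wardle.example.com sample group; the service name and CA bundle here are placeholders, not the test's real values.

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
    port := int32(443)
    svc := &apiregistrationv1.APIService{
        ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
        Spec: apiregistrationv1.APIServiceSpec{
            Group:   "wardle.example.com",
            Version: "v1alpha1",
            Service: &apiregistrationv1.ServiceReference{
                Namespace: "aggregator-6085", // the test namespace in this run
                Name:      "sample-api",      // placeholder service name
                Port:      &port,
            },
            // PEM CA that signed the sample server's serving cert (placeholder).
            CABundle:             []byte("<PEM bundle>"),
            GroupPriorityMinimum: 2000,
            VersionPriority:      200,
        },
    }
    fmt.Println(svc.Name)
}

Once the APIService is Available, requests for that group/version hit the aggregated server instead of the core apiserver, which is what the readiness wait below is checking.
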
Feb 18 17:11:32.891: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Feb 18 17:11:35.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642692, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642692, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642693, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642692, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 17:11:37.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642692, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642692, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642693, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642692, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 17:11:39.178: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642692, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642692, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642693, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642692, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 17:11:41.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642692, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642692, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642693, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642692, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 18 17:11:43.830: INFO: Waited 634.857809ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:11:44.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6085" for this suite. • [SLOW TEST:12.036 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":280,"completed":176,"skipped":2773,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} S ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:11:44.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Feb 18 17:11:44.639: INFO: Pod name pod-release: Found 0 pods out of 1 Feb 18 17:11:49.666: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:11:49.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4398" for this suite. 
• [SLOW TEST:5.557 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":280,"completed":177,"skipped":2774,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:11:50.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 18 17:11:50.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 18 17:11:53.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6795 create -f -'
Feb 18 17:11:57.270: INFO: stderr: ""
Feb 18 17:11:57.271: INFO: stdout: "e2e-test-crd-publish-openapi-5910-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 18 17:11:57.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6795 delete e2e-test-crd-publish-openapi-5910-crds test-cr'
Feb 18 17:11:57.414: INFO: stderr: ""
Feb 18 17:11:57.414: INFO: stdout: "e2e-test-crd-publish-openapi-5910-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Feb 18 17:11:57.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6795 apply -f -'
Feb 18 17:11:57.901: INFO: stderr: ""
Feb 18 17:11:57.902: INFO: stdout: "e2e-test-crd-publish-openapi-5910-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 18 17:11:57.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6795 delete e2e-test-crd-publish-openapi-5910-crds test-cr'
Feb 18 17:11:58.052: INFO: stderr: ""
Feb 18 17:11:58.052: INFO: stdout: "e2e-test-crd-publish-openapi-5910-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb 18 17:11:58.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5910-crds'
Feb 18 17:11:58.434: INFO: stderr: ""
Feb 18 17:11:58.434: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5910-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:12:01.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6795" for this suite.
• [SLOW TEST:11.496 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":280,"completed":178,"skipped":2796,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
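
Background for the spec above: "preserving unknown fields at the schema root" means the CRD's structural schema sets x-kubernetes-preserve-unknown-fields at the top level, so clients may submit arbitrary properties and the published OpenAPI carries no per-field documentation, which is why kubectl explain printed an essentially empty DESCRIPTION. A sketch of such a CRD with the apiextensions/v1 Go types; the group and resource names are invented for illustration.

package main

import (
    "fmt"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    preserve := true
    crd := &apiextensionsv1.CustomResourceDefinition{
        ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
        Spec: apiextensionsv1.CustomResourceDefinitionSpec{
            Group: "example.com",
            Scope: apiextensionsv1.NamespaceScoped,
            Names: apiextensionsv1.CustomResourceDefinitionNames{
                Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
            },
            Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
                Name: "v1", Served: true, Storage: true,
                Schema: &apiextensionsv1.CustomResourceValidation{
                    OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
                        Type: "object",
                        // Accept any properties at the root instead of pruning them.
                        XPreserveUnknownFields: &preserve,
                    },
                },
            }},
        },
    }
    fmt.Println(crd.Name)
}
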
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:12:01.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-92f63d4f-18db-4862-80c4-5a2f130cb75d
STEP: Creating a pod to test consume configMaps
Feb 18 17:12:02.421: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bdbab21b-d5c5-4d44-a96f-394814e9fe7c" in namespace "projected-1065" to be "success or failure"
Feb 18 17:12:02.427: INFO: Pod "pod-projected-configmaps-bdbab21b-d5c5-4d44-a96f-394814e9fe7c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.599737ms
Feb 18 17:12:04.435: INFO: Pod "pod-projected-configmaps-bdbab21b-d5c5-4d44-a96f-394814e9fe7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013830773s
Feb 18 17:12:06.443: INFO: Pod "pod-projected-configmaps-bdbab21b-d5c5-4d44-a96f-394814e9fe7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021780398s
Feb 18 17:12:08.451: INFO: Pod "pod-projected-configmaps-bdbab21b-d5c5-4d44-a96f-394814e9fe7c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029821656s
Feb 18 17:12:10.464: INFO: Pod "pod-projected-configmaps-bdbab21b-d5c5-4d44-a96f-394814e9fe7c": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 8.042894917s STEP: Saw pod success Feb 18 17:12:10.465: INFO: Pod "pod-projected-configmaps-bdbab21b-d5c5-4d44-a96f-394814e9fe7c" satisfied condition "success or failure" Feb 18 17:12:10.488: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-bdbab21b-d5c5-4d44-a96f-394814e9fe7c container projected-configmap-volume-test: STEP: delete the pod Feb 18 17:12:10.536: INFO: Waiting for pod pod-projected-configmaps-bdbab21b-d5c5-4d44-a96f-394814e9fe7c to disappear Feb 18 17:12:10.545: INFO: Pod pod-projected-configmaps-bdbab21b-d5c5-4d44-a96f-394814e9fe7c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:12:10.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1065" for this suite. • [SLOW TEST:9.021 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":179,"skipped":2822,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:12:10.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 18 17:12:11.005: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 18 17:12:13.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642731, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642731, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642731, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642730, 
loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 17:12:15.023: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642731, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642731, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642731, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642730, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 17:12:17.056: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642731, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642731, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642731, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717642730, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 17:12:20.080: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 18 17:12:20.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:12:21.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-857" for this suite.
STEP: Destroying namespace "webhook-857-markers" for this suite.
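
For context on what "should be denied" means mechanically: the API server POSTs an AdmissionReview to the registered webhook, and the webhook answers Allowed=false with a reason that surfaces as the client's error. A bare-bones sketch of such a handler using the admission/v1 types; the URL path, port, cert file names, and message are placeholders, and a production handler would also verify the content type and request body.

package main

import (
    "encoding/json"
    "log"
    "net/http"

    admissionv1 "k8s.io/api/admission/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func deny(w http.ResponseWriter, r *http.Request) {
    var review admissionv1.AdmissionReview
    if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
        http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
        return
    }
    review.Response = &admissionv1.AdmissionResponse{
        UID:     review.Request.UID, // the response must echo the request UID
        Allowed: false,
        Result:  &metav1.Status{Message: "the custom resource contains unwanted data"},
    }
    if err := json.NewEncoder(w).Encode(&review); err != nil {
        log.Println(err)
    }
}

func main() {
    http.HandleFunc("/custom-resource", deny)
    // Admission webhooks must be served over TLS; cert paths are placeholders.
    log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
}
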
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:11.054 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":280,"completed":180,"skipped":2856,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:12:21.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 18 17:12:21.724: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41e1c8c4-23db-4727-9a5f-c2d8f163324c" in namespace "downward-api-4691" to be "success or failure"
Feb 18 17:12:21.763: INFO: Pod "downwardapi-volume-41e1c8c4-23db-4727-9a5f-c2d8f163324c": Phase="Pending", Reason="", readiness=false. Elapsed: 39.652074ms
Feb 18 17:12:23.774: INFO: Pod "downwardapi-volume-41e1c8c4-23db-4727-9a5f-c2d8f163324c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050416868s
Feb 18 17:12:25.784: INFO: Pod "downwardapi-volume-41e1c8c4-23db-4727-9a5f-c2d8f163324c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059862202s
Feb 18 17:12:27.794: INFO: Pod "downwardapi-volume-41e1c8c4-23db-4727-9a5f-c2d8f163324c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070618229s
Feb 18 17:12:29.801: INFO: Pod "downwardapi-volume-41e1c8c4-23db-4727-9a5f-c2d8f163324c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077433736s
Feb 18 17:12:31.814: INFO: Pod "downwardapi-volume-41e1c8c4-23db-4727-9a5f-c2d8f163324c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090021835s
STEP: Saw pod success
Feb 18 17:12:31.814: INFO: Pod "downwardapi-volume-41e1c8c4-23db-4727-9a5f-c2d8f163324c" satisfied condition "success or failure"
Feb 18 17:12:31.823: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-41e1c8c4-23db-4727-9a5f-c2d8f163324c container client-container:
STEP: delete the pod
Feb 18 17:12:31.948: INFO: Waiting for pod downwardapi-volume-41e1c8c4-23db-4727-9a5f-c2d8f163324c to disappear
Feb 18 17:12:31.989: INFO: Pod downwardapi-volume-41e1c8c4-23db-4727-9a5f-c2d8f163324c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:12:31.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4691" for this suite.
• [SLOW TEST:10.464 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":280,"completed":181,"skipped":2862,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
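
The spec that just passed exercises the downward API volume: a file inside the pod is populated from the pod's own metadata via a fieldRef, here metadata.name. A minimal sketch of an equivalent pod in the core/v1 Go types; the mount path and image are placeholders.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-pod"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "podname",
                            // Resolve the file's contents from the pod's own name.
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox", // placeholder image
                Command:      []string{"cat", "/etc/podinfo/podname"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    fmt.Println(pod.Name)
}
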
[sig-cli] Kubectl client Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:12:32.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating api versions
Feb 18 17:12:32.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 18 17:12:32.627: INFO: stderr: ""
Feb 18 17:12:32.627: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:12:32.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-660" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":280,"completed":182,"skipped":2901,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:12:32.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the initial replication controller Feb 18 17:12:32.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2747' Feb 18 17:12:33.208: INFO: stderr: "" Feb 18 17:12:33.209: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 18 17:12:33.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2747' Feb 18 17:12:33.362: INFO: stderr: "" Feb 18 17:12:33.362: INFO: stdout: "update-demo-nautilus-4g7wm update-demo-nautilus-wclqw " Feb 18 17:12:33.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g7wm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2747' Feb 18 17:12:33.502: INFO: stderr: "" Feb 18 17:12:33.502: INFO: stdout: "" Feb 18 17:12:33.502: INFO: update-demo-nautilus-4g7wm is created but not running Feb 18 17:12:38.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2747' Feb 18 17:12:38.662: INFO: stderr: "" Feb 18 17:12:38.662: INFO: stdout: "update-demo-nautilus-4g7wm update-demo-nautilus-wclqw " Feb 18 17:12:38.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g7wm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2747' Feb 18 17:12:38.827: INFO: stderr: "" Feb 18 17:12:38.827: INFO: stdout: "" Feb 18 17:12:38.827: INFO: update-demo-nautilus-4g7wm is created but not running Feb 18 17:12:43.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2747' Feb 18 17:12:44.003: INFO: stderr: "" Feb 18 17:12:44.004: INFO: stdout: "update-demo-nautilus-4g7wm update-demo-nautilus-wclqw " Feb 18 17:12:44.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g7wm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2747' Feb 18 17:12:44.184: INFO: stderr: "" Feb 18 17:12:44.184: INFO: stdout: "true" Feb 18 17:12:44.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4g7wm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2747' Feb 18 17:12:44.279: INFO: stderr: "" Feb 18 17:12:44.279: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 18 17:12:44.279: INFO: validating pod update-demo-nautilus-4g7wm Feb 18 17:12:44.393: INFO: got data: { "image": "nautilus.jpg" } Feb 18 17:12:44.393: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 18 17:12:44.393: INFO: update-demo-nautilus-4g7wm is verified up and running Feb 18 17:12:44.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wclqw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2747' Feb 18 17:12:44.604: INFO: stderr: "" Feb 18 17:12:44.605: INFO: stdout: "true" Feb 18 17:12:44.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wclqw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2747' Feb 18 17:12:44.720: INFO: stderr: "" Feb 18 17:12:44.721: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 18 17:12:44.721: INFO: validating pod update-demo-nautilus-wclqw Feb 18 17:12:44.742: INFO: got data: { "image": "nautilus.jpg" } Feb 18 17:12:44.742: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 18 17:12:44.742: INFO: update-demo-nautilus-wclqw is verified up and running STEP: rolling-update to new replication controller Feb 18 17:12:44.745: INFO: scanned /root for discovery docs: Feb 18 17:12:44.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2747' Feb 18 17:13:15.201: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 18 17:13:15.202: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 18 17:13:15.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2747' Feb 18 17:13:15.403: INFO: stderr: "" Feb 18 17:13:15.403: INFO: stdout: "update-demo-kitten-2jzll update-demo-kitten-j25c7 " Feb 18 17:13:15.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2jzll -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2747' Feb 18 17:13:15.498: INFO: stderr: "" Feb 18 17:13:15.498: INFO: stdout: "true" Feb 18 17:13:15.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2jzll -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2747' Feb 18 17:13:15.581: INFO: stderr: "" Feb 18 17:13:15.581: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 18 17:13:15.581: INFO: validating pod update-demo-kitten-2jzll Feb 18 17:13:15.596: INFO: got data: { "image": "kitten.jpg" } Feb 18 17:13:15.596: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 18 17:13:15.596: INFO: update-demo-kitten-2jzll is verified up and running Feb 18 17:13:15.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j25c7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2747' Feb 18 17:13:15.698: INFO: stderr: "" Feb 18 17:13:15.698: INFO: stdout: "true" Feb 18 17:13:15.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j25c7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2747' Feb 18 17:13:15.824: INFO: stderr: "" Feb 18 17:13:15.824: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 18 17:13:15.824: INFO: validating pod update-demo-kitten-j25c7 Feb 18 17:13:15.843: INFO: got data: { "image": "kitten.jpg" } Feb 18 17:13:15.843: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 18 17:13:15.843: INFO: update-demo-kitten-j25c7 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:13:15.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2747" for this suite. • [SLOW TEST:43.213 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":280,"completed":183,"skipped":2919,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:13:15.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:13:15.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8774" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":280,"completed":184,"skipped":2932,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:13:15.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:13:27.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8852" for this suite. • [SLOW TEST:11.180 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":280,"completed":185,"skipped":2949,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:13:27.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 18 17:13:27.257: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3562 /api/v1/namespaces/watch-3562/configmaps/e2e-watch-test-configmap-a 2693e3a7-8438-4366-b7c8-95075433aaee 9220536 0 2020-02-18 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 18 17:13:27.257: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3562 /api/v1/namespaces/watch-3562/configmaps/e2e-watch-test-configmap-a 2693e3a7-8438-4366-b7c8-95075433aaee 9220536 0 2020-02-18 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Feb 18 17:13:37.270: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3562 /api/v1/namespaces/watch-3562/configmaps/e2e-watch-test-configmap-a 2693e3a7-8438-4366-b7c8-95075433aaee 9220580 0 2020-02-18 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 18 17:13:37.270: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3562 /api/v1/namespaces/watch-3562/configmaps/e2e-watch-test-configmap-a 2693e3a7-8438-4366-b7c8-95075433aaee 9220580 0 2020-02-18 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Feb 18 17:13:47.285: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3562 /api/v1/namespaces/watch-3562/configmaps/e2e-watch-test-configmap-a 2693e3a7-8438-4366-b7c8-95075433aaee 9220606 0 2020-02-18 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 18 17:13:47.285: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3562 /api/v1/namespaces/watch-3562/configmaps/e2e-watch-test-configmap-a 2693e3a7-8438-4366-b7c8-95075433aaee 9220606 0 2020-02-18 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Feb 18 17:13:57.298: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3562 /api/v1/namespaces/watch-3562/configmaps/e2e-watch-test-configmap-a 2693e3a7-8438-4366-b7c8-95075433aaee 9220632 0 2020-02-18 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 18 17:13:57.298: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3562 /api/v1/namespaces/watch-3562/configmaps/e2e-watch-test-configmap-a 2693e3a7-8438-4366-b7c8-95075433aaee 9220632 0 2020-02-18 17:13:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Feb 18 17:14:07.313: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3562 /api/v1/namespaces/watch-3562/configmaps/e2e-watch-test-configmap-b 56463a24-50c4-4f1f-8ecc-36c39dc7c72d 9220656 0 2020-02-18 17:14:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 18 17:14:07.314: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3562 
/api/v1/namespaces/watch-3562/configmaps/e2e-watch-test-configmap-b 56463a24-50c4-4f1f-8ecc-36c39dc7c72d 9220656 0 2020-02-18 17:14:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Feb 18 17:14:17.338: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3562 /api/v1/namespaces/watch-3562/configmaps/e2e-watch-test-configmap-b 56463a24-50c4-4f1f-8ecc-36c39dc7c72d 9220678 0 2020-02-18 17:14:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 18 17:14:17.339: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3562 /api/v1/namespaces/watch-3562/configmaps/e2e-watch-test-configmap-b 56463a24-50c4-4f1f-8ecc-36c39dc7c72d 9220678 0 2020-02-18 17:14:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:14:27.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3562" for this suite. • [SLOW TEST:60.180 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":280,"completed":186,"skipped":2964,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:14:27.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-7b73bf6a-f7a4-4de4-a712-90009e3ce6f9 STEP: Creating a pod to test consume secrets Feb 18 17:14:27.524: INFO: Waiting up to 5m0s for pod "pod-secrets-19db5e08-54a8-4572-b363-029e6755b72c" in namespace "secrets-7123" to be "success or failure" Feb 18 17:14:27.628: INFO: Pod "pod-secrets-19db5e08-54a8-4572-b363-029e6755b72c": Phase="Pending", Reason="", readiness=false. Elapsed: 104.351922ms Feb 18 17:14:29.640: INFO: Pod "pod-secrets-19db5e08-54a8-4572-b363-029e6755b72c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.115743263s Feb 18 17:14:31.648: INFO: Pod "pod-secrets-19db5e08-54a8-4572-b363-029e6755b72c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123839209s Feb 18 17:14:33.658: INFO: Pod "pod-secrets-19db5e08-54a8-4572-b363-029e6755b72c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.13422086s STEP: Saw pod success Feb 18 17:14:33.658: INFO: Pod "pod-secrets-19db5e08-54a8-4572-b363-029e6755b72c" satisfied condition "success or failure" Feb 18 17:14:33.667: INFO: Trying to get logs from node jerma-node pod pod-secrets-19db5e08-54a8-4572-b363-029e6755b72c container secret-volume-test: STEP: delete the pod Feb 18 17:14:33.802: INFO: Waiting for pod pod-secrets-19db5e08-54a8-4572-b363-029e6755b72c to disappear Feb 18 17:14:33.825: INFO: Pod pod-secrets-19db5e08-54a8-4572-b363-029e6755b72c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:14:33.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7123" for this suite. • [SLOW TEST:6.505 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":187,"skipped":2970,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:14:33.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-5902 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 18 17:14:34.010: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 18 17:14:34.084: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 18 17:14:36.104: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 18 17:14:38.089: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 18 17:14:42.726: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 18 17:14:45.044: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 18 17:14:46.089: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 18 17:14:48.103: INFO: The status of Pod netserver-0 is Running (Ready 
= false) Feb 18 17:14:50.093: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 18 17:14:52.089: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 18 17:14:54.092: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 18 17:14:56.090: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 18 17:14:58.090: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 18 17:15:00.091: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 18 17:15:00.097: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 18 17:15:08.149: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-5902 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 17:15:08.149: INFO: >>> kubeConfig: /root/.kube/config I0218 17:15:08.199772 9 log.go:172] (0xc001ed1290) (0xc001598f00) Create stream I0218 17:15:08.200006 9 log.go:172] (0xc001ed1290) (0xc001598f00) Stream added, broadcasting: 1 I0218 17:15:08.204388 9 log.go:172] (0xc001ed1290) Reply frame received for 1 I0218 17:15:08.204424 9 log.go:172] (0xc001ed1290) (0xc0026d9040) Create stream I0218 17:15:08.204434 9 log.go:172] (0xc001ed1290) (0xc0026d9040) Stream added, broadcasting: 3 I0218 17:15:08.205708 9 log.go:172] (0xc001ed1290) Reply frame received for 3 I0218 17:15:08.205737 9 log.go:172] (0xc001ed1290) (0xc0015992c0) Create stream I0218 17:15:08.205754 9 log.go:172] (0xc001ed1290) (0xc0015992c0) Stream added, broadcasting: 5 I0218 17:15:08.207304 9 log.go:172] (0xc001ed1290) Reply frame received for 5 I0218 17:15:08.297011 9 log.go:172] (0xc001ed1290) Data frame received for 3 I0218 17:15:08.297127 9 log.go:172] (0xc0026d9040) (3) Data frame handling I0218 17:15:08.297153 9 log.go:172] (0xc0026d9040) (3) Data frame sent I0218 17:15:08.385703 9 log.go:172] (0xc001ed1290) Data frame received for 1 I0218 17:15:08.385797 9 log.go:172] (0xc001ed1290) (0xc0015992c0) Stream removed, broadcasting: 5 I0218 17:15:08.385871 9 log.go:172] (0xc001598f00) (1) Data frame handling I0218 17:15:08.385940 9 log.go:172] (0xc001598f00) (1) Data frame sent I0218 17:15:08.385967 9 log.go:172] (0xc001ed1290) (0xc0026d9040) Stream removed, broadcasting: 3 I0218 17:15:08.386008 9 log.go:172] (0xc001ed1290) (0xc001598f00) Stream removed, broadcasting: 1 I0218 17:15:08.386035 9 log.go:172] (0xc001ed1290) Go away received I0218 17:15:08.386329 9 log.go:172] (0xc001ed1290) (0xc001598f00) Stream removed, broadcasting: 1 I0218 17:15:08.386364 9 log.go:172] (0xc001ed1290) (0xc0026d9040) Stream removed, broadcasting: 3 I0218 17:15:08.386387 9 log.go:172] (0xc001ed1290) (0xc0015992c0) Stream removed, broadcasting: 5 Feb 18 17:15:08.386: INFO: Waiting for responses: map[] Feb 18 17:15:08.393: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-5902 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 18 17:15:08.394: INFO: >>> kubeConfig: /root/.kube/config I0218 17:15:08.441769 9 log.go:172] (0xc0021104d0) (0xc0004f1540) Create stream I0218 17:15:08.441851 9 log.go:172] (0xc0021104d0) (0xc0004f1540) Stream added, broadcasting: 1 I0218 17:15:08.451430 9 log.go:172] (0xc0021104d0) Reply frame 
received for 1 I0218 17:15:08.451575 9 log.go:172] (0xc0021104d0) (0xc001cfb400) Create stream I0218 17:15:08.451602 9 log.go:172] (0xc0021104d0) (0xc001cfb400) Stream added, broadcasting: 3 I0218 17:15:08.453184 9 log.go:172] (0xc0021104d0) Reply frame received for 3 I0218 17:15:08.453293 9 log.go:172] (0xc0021104d0) (0xc0004f1680) Create stream I0218 17:15:08.453316 9 log.go:172] (0xc0021104d0) (0xc0004f1680) Stream added, broadcasting: 5 I0218 17:15:08.455173 9 log.go:172] (0xc0021104d0) Reply frame received for 5 I0218 17:15:08.551697 9 log.go:172] (0xc0021104d0) Data frame received for 3 I0218 17:15:08.551879 9 log.go:172] (0xc001cfb400) (3) Data frame handling I0218 17:15:08.551912 9 log.go:172] (0xc001cfb400) (3) Data frame sent I0218 17:15:08.674284 9 log.go:172] (0xc0021104d0) (0xc001cfb400) Stream removed, broadcasting: 3 I0218 17:15:08.674890 9 log.go:172] (0xc0021104d0) Data frame received for 1 I0218 17:15:08.675127 9 log.go:172] (0xc0021104d0) (0xc0004f1680) Stream removed, broadcasting: 5 I0218 17:15:08.675284 9 log.go:172] (0xc0004f1540) (1) Data frame handling I0218 17:15:08.675350 9 log.go:172] (0xc0004f1540) (1) Data frame sent I0218 17:15:08.675389 9 log.go:172] (0xc0021104d0) (0xc0004f1540) Stream removed, broadcasting: 1 I0218 17:15:08.675447 9 log.go:172] (0xc0021104d0) Go away received I0218 17:15:08.675961 9 log.go:172] (0xc0021104d0) (0xc0004f1540) Stream removed, broadcasting: 1 I0218 17:15:08.676017 9 log.go:172] (0xc0021104d0) (0xc001cfb400) Stream removed, broadcasting: 3 I0218 17:15:08.676074 9 log.go:172] (0xc0021104d0) (0xc0004f1680) Stream removed, broadcasting: 5 Feb 18 17:15:08.676: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:15:08.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5902" for this suite. 
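The /dial probe driven above is agnhost's connectivity helper: the test pod is told to fetch /hostname from each netserver endpoint over HTTP, and the framework waits until every expected hostname has answered (the final "Waiting for responses: map[]" means nothing is outstanding). A rough manual equivalent, reusing the (ephemeral) pod IPs from this run:

  # ask the test pod to dial netserver-0 once over HTTP; prints the hostnames that answered
  kubectl exec -n pod-network-test-5902 test-container-pod -- \
    curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'
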
• [SLOW TEST:34.838 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":280,"completed":188,"skipped":2972,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:15:08.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Feb 18 17:15:08.779: INFO: namespace kubectl-3797 Feb 18 17:15:08.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3797' Feb 18 17:15:09.157: INFO: stderr: "" Feb 18 17:15:09.157: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Feb 18 17:15:10.165: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:15:10.165: INFO: Found 0 / 1 Feb 18 17:15:11.165: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:15:11.165: INFO: Found 0 / 1 Feb 18 17:15:12.168: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:15:12.169: INFO: Found 0 / 1 Feb 18 17:15:13.166: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:15:13.166: INFO: Found 0 / 1 Feb 18 17:15:14.477: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:15:14.478: INFO: Found 0 / 1 Feb 18 17:15:16.180: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:15:16.180: INFO: Found 0 / 1 Feb 18 17:15:17.163: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:15:17.163: INFO: Found 0 / 1 Feb 18 17:15:18.224: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:15:18.225: INFO: Found 0 / 1 Feb 18 17:15:19.168: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:15:19.168: INFO: Found 0 / 1 Feb 18 17:15:20.163: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:15:20.163: INFO: Found 0 / 1 Feb 18 17:15:21.163: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:15:21.163: INFO: Found 1 / 1 Feb 18 17:15:21.163: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 18 17:15:21.168: INFO: Selector matched 1 pods for map[app:agnhost] Feb 18 17:15:21.168: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 18 17:15:21.168: INFO: wait on agnhost-master startup in kubectl-3797 Feb 18 17:15:21.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-pr4tn agnhost-master --namespace=kubectl-3797' Feb 18 17:15:21.297: INFO: stderr: "" Feb 18 17:15:21.297: INFO: stdout: "Paused\n" STEP: exposing RC Feb 18 17:15:21.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3797' Feb 18 17:15:21.448: INFO: stderr: "" Feb 18 17:15:21.448: INFO: stdout: "service/rm2 exposed\n" Feb 18 17:15:21.453: INFO: Service rm2 in namespace kubectl-3797 found. STEP: exposing service Feb 18 17:15:23.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3797' Feb 18 17:15:23.808: INFO: stderr: "" Feb 18 17:15:23.808: INFO: stdout: "service/rm3 exposed\n" Feb 18 17:15:23.821: INFO: Service rm3 in namespace kubectl-3797 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:15:25.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3797" for this suite. • [SLOW TEST:17.159 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":280,"completed":189,"skipped":2975,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:15:25.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 18 17:15:26.561: INFO: Pod name wrapped-volume-race-23ed5078-e3ac-44e1-9e68-eb703ad55356: Found 0 pods out of 5 Feb 18 17:15:31.573: INFO: Pod name wrapped-volume-race-23ed5078-e3ac-44e1-9e68-eb703ad55356: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-23ed5078-e3ac-44e1-9e68-eb703ad55356 in namespace emptydir-wrapper-9076, will wait for the garbage collector to delete the pods Feb 18 17:16:00.628: INFO: Deleting ReplicationController wrapped-volume-race-23ed5078-e3ac-44e1-9e68-eb703ad55356 took: 13.445084ms Feb 18 17:16:01.029: INFO: 
Terminating ReplicationController wrapped-volume-race-23ed5078-e3ac-44e1-9e68-eb703ad55356 pods took: 401.304246ms STEP: Creating RC which spawns configmap-volume pods Feb 18 17:16:22.881: INFO: Pod name wrapped-volume-race-44651893-50c0-4108-bad8-dd07c5d66bc3: Found 0 pods out of 5 Feb 18 17:16:30.516: INFO: Pod name wrapped-volume-race-44651893-50c0-4108-bad8-dd07c5d66bc3: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-44651893-50c0-4108-bad8-dd07c5d66bc3 in namespace emptydir-wrapper-9076, will wait for the garbage collector to delete the pods Feb 18 17:16:55.019: INFO: Deleting ReplicationController wrapped-volume-race-44651893-50c0-4108-bad8-dd07c5d66bc3 took: 7.783897ms Feb 18 17:16:55.420: INFO: Terminating ReplicationController wrapped-volume-race-44651893-50c0-4108-bad8-dd07c5d66bc3 pods took: 400.40642ms STEP: Creating RC which spawns configmap-volume pods Feb 18 17:17:14.184: INFO: Pod name wrapped-volume-race-5fd6abac-e862-467d-96a1-6594996018fa: Found 0 pods out of 5 Feb 18 17:17:19.213: INFO: Pod name wrapped-volume-race-5fd6abac-e862-467d-96a1-6594996018fa: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5fd6abac-e862-467d-96a1-6594996018fa in namespace emptydir-wrapper-9076, will wait for the garbage collector to delete the pods Feb 18 17:17:49.312: INFO: Deleting ReplicationController wrapped-volume-race-5fd6abac-e862-467d-96a1-6594996018fa took: 12.294267ms Feb 18 17:17:49.714: INFO: Terminating ReplicationController wrapped-volume-race-5fd6abac-e862-467d-96a1-6594996018fa pods took: 401.21299ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:18:05.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9076" for this suite. • [SLOW TEST:159.899 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":280,"completed":190,"skipped":2979,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:18:05.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:18:16.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2386" for this suite. • [SLOW TEST:11.204 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":280,"completed":191,"skipped":3011,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:18:16.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating secret secrets-4824/secret-test-d7fac410-8bc7-46f0-862b-152716a0bffc STEP: Creating a pod to test consume secrets Feb 18 17:18:17.078: INFO: Waiting up to 5m0s for pod "pod-configmaps-3549780e-ca51-4023-a7c9-73e29e49458d" in namespace "secrets-4824" to be "success or failure" Feb 18 17:18:17.098: INFO: Pod "pod-configmaps-3549780e-ca51-4023-a7c9-73e29e49458d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.309609ms Feb 18 17:18:19.105: INFO: Pod "pod-configmaps-3549780e-ca51-4023-a7c9-73e29e49458d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026463217s Feb 18 17:18:21.121: INFO: Pod "pod-configmaps-3549780e-ca51-4023-a7c9-73e29e49458d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04181126s Feb 18 17:18:23.133: INFO: Pod "pod-configmaps-3549780e-ca51-4023-a7c9-73e29e49458d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053846063s Feb 18 17:18:25.141: INFO: Pod "pod-configmaps-3549780e-ca51-4023-a7c9-73e29e49458d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.06252645s STEP: Saw pod success Feb 18 17:18:25.142: INFO: Pod "pod-configmaps-3549780e-ca51-4023-a7c9-73e29e49458d" satisfied condition "success or failure" Feb 18 17:18:25.146: INFO: Trying to get logs from node jerma-node pod pod-configmaps-3549780e-ca51-4023-a7c9-73e29e49458d container env-test: STEP: delete the pod Feb 18 17:18:25.280: INFO: Waiting for pod pod-configmaps-3549780e-ca51-4023-a7c9-73e29e49458d to disappear Feb 18 17:18:25.289: INFO: Pod pod-configmaps-3549780e-ca51-4023-a7c9-73e29e49458d no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:18:25.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4824" for this suite. • [SLOW TEST:8.334 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":192,"skipped":3037,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:18:25.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Feb 18 17:18:25.503: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 18 17:18:25.523: INFO: Waiting for terminating namespaces to be deleted... 
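Before exercising the predicate, the suite takes stock of every pod already running on each node (logged next) so it knows which resources are spoken for. Outside the framework, a comparable per-node inventory can be pulled with a field selector; the node name is the one from this run:

  # list all pods placed on jerma-node, across namespaces
  kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=jerma-node
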
Feb 18 17:18:25.527: INFO: Logging pods the kubelet thinks are on node jerma-node before test Feb 18 17:18:25.536: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded) Feb 18 17:18:25.536: INFO: Container kube-proxy ready: true, restart count 0 Feb 18 17:18:25.536: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 18 17:18:25.536: INFO: Container weave ready: true, restart count 1 Feb 18 17:18:25.536: INFO: Container weave-npc ready: true, restart count 0 Feb 18 17:18:25.536: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Feb 18 17:18:25.568: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Feb 18 17:18:25.568: INFO: Container coredns ready: true, restart count 0 Feb 18 17:18:25.568: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Feb 18 17:18:25.568: INFO: Container coredns ready: true, restart count 0 Feb 18 17:18:25.568: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 18 17:18:25.568: INFO: Container weave ready: true, restart count 0 Feb 18 17:18:25.568: INFO: Container weave-npc ready: true, restart count 0 Feb 18 17:18:25.568: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Feb 18 17:18:25.568: INFO: Container kube-controller-manager ready: true, restart count 12 Feb 18 17:18:25.568: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded) Feb 18 17:18:25.568: INFO: Container kube-proxy ready: true, restart count 0 Feb 18 17:18:25.568: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Feb 18 17:18:25.568: INFO: Container kube-scheduler ready: true, restart count 16 Feb 18 17:18:25.568: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Feb 18 17:18:25.568: INFO: Container kube-apiserver ready: true, restart count 1 Feb 18 17:18:25.568: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Feb 18 17:18:25.568: INFO: Container etcd ready: true, restart count 1 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-de143d1c-067c-430a-9314-ed7e51088932 42 STEP: Trying to relaunch the pod, now with labels.
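At this point the node carries the freshly applied random label and the relaunched pod selects it via spec.nodeSelector, which is what "respected if matching" means; the label is then stripped again below. A minimal sketch of the same pinning, with a hypothetical label key, value, and pod name (the real test generates all three):

  kubectl label node jerma-node example.com/e2e-demo=42
  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: nodeselector-demo    # hypothetical name
  spec:
    nodeSelector:
      example.com/e2e-demo: "42"
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
  EOF
  # a trailing dash removes the label again
  kubectl label node jerma-node example.com/e2e-demo-
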
STEP: removing the label kubernetes.io/e2e-de143d1c-067c-430a-9314-ed7e51088932 off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-de143d1c-067c-430a-9314-ed7e51088932 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:18:44.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2713" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:18.732 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":280,"completed":193,"skipped":3044,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:18:44.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 18 17:18:53.320: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:18:53.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8004" for this suite. 
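The assertion above ("Expected: &{} to match Container's Termination Message: --") checks that with FallbackToLogsOnError a container that succeeds leaves the termination message empty: logs are substituted only when the container fails without writing one. A minimal sketch of the policy, with hypothetical names:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: termmsg-demo         # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox:1.31
      command: ["sh", "-c", "exit 0"]   # succeeds without writing /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
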
• [SLOW TEST:9.387 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":194,"skipped":3102,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:18:53.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 17:18:53.669: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Pending, waiting for it to be Running (with Ready = true) Feb 18 17:18:55.677: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Pending, waiting for it to be Running (with Ready = true) Feb 18 17:18:57.678: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Pending, waiting for it to be Running (with Ready = true) Feb 18 17:18:59.676: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Pending, waiting for it to be Running (with Ready = true) Feb 18 17:19:01.692: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Pending, waiting for it to be Running (with Ready = true) Feb 18 17:19:03.677: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Running (Ready = false) Feb 18 17:19:05.677: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Running (Ready = false) Feb 18 17:19:07.676: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Running (Ready = false) Feb 18 17:19:09.677: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Running (Ready = false) Feb 18 17:19:11.676: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Running (Ready = false) Feb 18 17:19:13.678: INFO: The status of 
Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Running (Ready = false) Feb 18 17:19:15.677: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Running (Ready = false) Feb 18 17:19:17.680: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Running (Ready = false) Feb 18 17:19:19.677: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Running (Ready = false) Feb 18 17:19:21.685: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Running (Ready = false) Feb 18 17:19:23.680: INFO: The status of Pod test-webserver-a70b8823-3210-47ed-918c-2977aec1302b is Running (Ready = true) Feb 18 17:19:23.686: INFO: Container started at 2020-02-18 17:19:01 +0000 UTC, pod became ready at 2020-02-18 17:19:23 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:19:23.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8387" for this suite. • [SLOW TEST:30.297 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":280,"completed":195,"skipped":3119,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:19:23.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0218 17:19:35.335508 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
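The pods in the collection step above vanish because each carries an ownerReference to the deleted RC and the delete is non-orphaning, so the garbage collector cascades it. From kubectl the same cascade looks like this (RC name hypothetical; kubectl of this vintage takes a boolean --cascade, while later releases take background/foreground/orphan):

  # delete the controller and let the garbage collector remove its pods
  kubectl delete rc demo-rc --cascade=true
  # inspect the ownership that drives the cascade
  kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[0].name}{"\n"}{end}'
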
Feb 18 17:19:35.335: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:19:35.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6633" for this suite. • [SLOW TEST:11.682 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":280,"completed":196,"skipped":3121,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:19:35.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7512 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7512 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7512 Feb 18 17:19:35.586: INFO: Found 0 stateful pods, waiting for 1 Feb 18 17:19:45.593: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true 
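The halt demonstrated next hinges on readiness: the test breaks ss-0's HTTP probe by moving its index.html aside, asks for more replicas, and watches the controller refuse to advance while an existing pod is unready. The two moving parts, restated as plain commands from this run (the scale-up can equally be done by patching spec.replicas):

  # fail ss-0's readiness probe by hiding the page it serves
  kubectl exec --namespace=statefulset-7512 ss-0 -- \
    /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
  # request more replicas; ordered creation stalls until ss-0 is Ready again
  kubectl scale statefulset ss --replicas=3 --namespace=statefulset-7512
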
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 18 17:19:45.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7512 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 18 17:19:46.030: INFO: stderr: "I0218 17:19:45.788432 3936 log.go:172] (0xc000b45c30) (0xc000b3c500) Create stream\nI0218 17:19:45.789012 3936 log.go:172] (0xc000b45c30) (0xc000b3c500) Stream added, broadcasting: 1\nI0218 17:19:45.792825 3936 log.go:172] (0xc000b45c30) Reply frame received for 1\nI0218 17:19:45.792871 3936 log.go:172] (0xc000b45c30) (0xc00092e460) Create stream\nI0218 17:19:45.792886 3936 log.go:172] (0xc000b45c30) (0xc00092e460) Stream added, broadcasting: 3\nI0218 17:19:45.794689 3936 log.go:172] (0xc000b45c30) Reply frame received for 3\nI0218 17:19:45.794727 3936 log.go:172] (0xc000b45c30) (0xc00089e000) Create stream\nI0218 17:19:45.794743 3936 log.go:172] (0xc000b45c30) (0xc00089e000) Stream added, broadcasting: 5\nI0218 17:19:45.803940 3936 log.go:172] (0xc000b45c30) Reply frame received for 5\nI0218 17:19:45.907203 3936 log.go:172] (0xc000b45c30) Data frame received for 5\nI0218 17:19:45.907420 3936 log.go:172] (0xc00089e000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 17:19:45.907448 3936 log.go:172] (0xc00089e000) (5) Data frame sent\nI0218 17:19:45.927676 3936 log.go:172] (0xc000b45c30) Data frame received for 3\nI0218 17:19:45.927701 3936 log.go:172] (0xc00092e460) (3) Data frame handling\nI0218 17:19:45.927721 3936 log.go:172] (0xc00092e460) (3) Data frame sent\nI0218 17:19:46.011611 3936 log.go:172] (0xc000b45c30) Data frame received for 1\nI0218 17:19:46.012264 3936 log.go:172] (0xc000b3c500) (1) Data frame handling\nI0218 17:19:46.012356 3936 log.go:172] (0xc000b3c500) (1) Data frame sent\nI0218 17:19:46.013797 3936 log.go:172] (0xc000b45c30) (0xc000b3c500) Stream removed, broadcasting: 1\nI0218 17:19:46.015573 3936 log.go:172] (0xc000b45c30) (0xc00092e460) Stream removed, broadcasting: 3\nI0218 17:19:46.015635 3936 log.go:172] (0xc000b45c30) (0xc00089e000) Stream removed, broadcasting: 5\nI0218 17:19:46.015699 3936 log.go:172] (0xc000b45c30) (0xc000b3c500) Stream removed, broadcasting: 1\nI0218 17:19:46.015711 3936 log.go:172] (0xc000b45c30) (0xc00092e460) Stream removed, broadcasting: 3\nI0218 17:19:46.015717 3936 log.go:172] (0xc000b45c30) (0xc00089e000) Stream removed, broadcasting: 5\n" Feb 18 17:19:46.031: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 18 17:19:46.031: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 18 17:19:46.042: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 18 17:19:56.049: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 18 17:19:56.049: INFO: Waiting for statefulset status.replicas updated to 0 Feb 18 17:19:56.062: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999583s Feb 18 17:19:57.070: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996197697s Feb 18 17:19:58.075: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.988824795s Feb 18 17:19:59.084: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.982129847s Feb 18 17:20:00.091: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.974205943s Feb 18 
17:20:01.098: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.967149573s Feb 18 17:20:02.103: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.960774903s Feb 18 17:20:03.110: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.955412117s Feb 18 17:20:04.116: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.948130388s Feb 18 17:20:05.138: INFO: Verifying statefulset ss doesn't scale past 1 for another 942.460642ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7512 Feb 18 17:20:06.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7512 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 17:20:06.585: INFO: stderr: "I0218 17:20:06.372289 3956 log.go:172] (0xc000a70630) (0xc0004c68c0) Create stream\nI0218 17:20:06.372518 3956 log.go:172] (0xc000a70630) (0xc0004c68c0) Stream added, broadcasting: 1\nI0218 17:20:06.377096 3956 log.go:172] (0xc000a70630) Reply frame received for 1\nI0218 17:20:06.377166 3956 log.go:172] (0xc000a70630) (0xc000a48000) Create stream\nI0218 17:20:06.377179 3956 log.go:172] (0xc000a70630) (0xc000a48000) Stream added, broadcasting: 3\nI0218 17:20:06.378614 3956 log.go:172] (0xc000a70630) Reply frame received for 3\nI0218 17:20:06.378685 3956 log.go:172] (0xc000a70630) (0xc000bf8000) Create stream\nI0218 17:20:06.378711 3956 log.go:172] (0xc000a70630) (0xc000bf8000) Stream added, broadcasting: 5\nI0218 17:20:06.381656 3956 log.go:172] (0xc000a70630) Reply frame received for 5\nI0218 17:20:06.460065 3956 log.go:172] (0xc000a70630) Data frame received for 3\nI0218 17:20:06.460148 3956 log.go:172] (0xc000a48000) (3) Data frame handling\nI0218 17:20:06.460189 3956 log.go:172] (0xc000a48000) (3) Data frame sent\nI0218 17:20:06.460255 3956 log.go:172] (0xc000a70630) Data frame received for 5\nI0218 17:20:06.460311 3956 log.go:172] (0xc000bf8000) (5) Data frame handling\nI0218 17:20:06.460331 3956 log.go:172] (0xc000bf8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 17:20:06.573038 3956 log.go:172] (0xc000a70630) (0xc000a48000) Stream removed, broadcasting: 3\nI0218 17:20:06.573139 3956 log.go:172] (0xc000a70630) Data frame received for 1\nI0218 17:20:06.573177 3956 log.go:172] (0xc0004c68c0) (1) Data frame handling\nI0218 17:20:06.573198 3956 log.go:172] (0xc0004c68c0) (1) Data frame sent\nI0218 17:20:06.573209 3956 log.go:172] (0xc000a70630) (0xc000bf8000) Stream removed, broadcasting: 5\nI0218 17:20:06.573260 3956 log.go:172] (0xc000a70630) (0xc0004c68c0) Stream removed, broadcasting: 1\nI0218 17:20:06.573816 3956 log.go:172] (0xc000a70630) (0xc0004c68c0) Stream removed, broadcasting: 1\nI0218 17:20:06.573914 3956 log.go:172] (0xc000a70630) (0xc000a48000) Stream removed, broadcasting: 3\nI0218 17:20:06.573968 3956 log.go:172] (0xc000a70630) (0xc000bf8000) Stream removed, broadcasting: 5\nI0218 17:20:06.574158 3956 log.go:172] (0xc000a70630) Go away received\n" Feb 18 17:20:06.586: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 18 17:20:06.586: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 18 17:20:06.591: INFO: Found 1 stateful pods, waiting for 3 Feb 18 17:20:16.664: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 18 17:20:16.665: INFO: 
Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 18 17:20:16.665: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 18 17:20:26.601: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 18 17:20:26.601: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 18 17:20:26.601: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 18 17:20:26.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7512 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 18 17:20:26.976: INFO: stderr: "I0218 17:20:26.769932 3976 log.go:172] (0xc0000ea580) (0xc0002fb540) Create stream\nI0218 17:20:26.770070 3976 log.go:172] (0xc0000ea580) (0xc0002fb540) Stream added, broadcasting: 1\nI0218 17:20:26.775801 3976 log.go:172] (0xc0000ea580) Reply frame received for 1\nI0218 17:20:26.775836 3976 log.go:172] (0xc0000ea580) (0xc0006b9c20) Create stream\nI0218 17:20:26.775847 3976 log.go:172] (0xc0000ea580) (0xc0006b9c20) Stream added, broadcasting: 3\nI0218 17:20:26.777109 3976 log.go:172] (0xc0000ea580) Reply frame received for 3\nI0218 17:20:26.777135 3976 log.go:172] (0xc0000ea580) (0xc00091e000) Create stream\nI0218 17:20:26.777145 3976 log.go:172] (0xc0000ea580) (0xc00091e000) Stream added, broadcasting: 5\nI0218 17:20:26.778373 3976 log.go:172] (0xc0000ea580) Reply frame received for 5\nI0218 17:20:26.868523 3976 log.go:172] (0xc0000ea580) Data frame received for 5\nI0218 17:20:26.868593 3976 log.go:172] (0xc00091e000) (5) Data frame handling\nI0218 17:20:26.868625 3976 log.go:172] (0xc00091e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 17:20:26.868653 3976 log.go:172] (0xc0000ea580) Data frame received for 3\nI0218 17:20:26.868668 3976 log.go:172] (0xc0006b9c20) (3) Data frame handling\nI0218 17:20:26.868684 3976 log.go:172] (0xc0006b9c20) (3) Data frame sent\nI0218 17:20:26.968308 3976 log.go:172] (0xc0000ea580) Data frame received for 1\nI0218 17:20:26.968463 3976 log.go:172] (0xc0000ea580) (0xc00091e000) Stream removed, broadcasting: 5\nI0218 17:20:26.968502 3976 log.go:172] (0xc0002fb540) (1) Data frame handling\nI0218 17:20:26.968530 3976 log.go:172] (0xc0002fb540) (1) Data frame sent\nI0218 17:20:26.968560 3976 log.go:172] (0xc0000ea580) (0xc0006b9c20) Stream removed, broadcasting: 3\nI0218 17:20:26.968588 3976 log.go:172] (0xc0000ea580) (0xc0002fb540) Stream removed, broadcasting: 1\nI0218 17:20:26.968596 3976 log.go:172] (0xc0000ea580) Go away received\nI0218 17:20:26.969766 3976 log.go:172] (0xc0000ea580) (0xc0002fb540) Stream removed, broadcasting: 1\nI0218 17:20:26.969791 3976 log.go:172] (0xc0000ea580) (0xc0006b9c20) Stream removed, broadcasting: 3\nI0218 17:20:26.969802 3976 log.go:172] (0xc0000ea580) (0xc00091e000) Stream removed, broadcasting: 5\n" Feb 18 17:20:26.977: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 18 17:20:26.977: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 18 17:20:26.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7512 ss-1 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 18 17:20:27.392: INFO: stderr: "I0218 17:20:27.170284 3996 log.go:172] (0xc000658630) (0xc000664960) Create stream\nI0218 17:20:27.170391 3996 log.go:172] (0xc000658630) (0xc000664960) Stream added, broadcasting: 1\nI0218 17:20:27.173583 3996 log.go:172] (0xc000658630) Reply frame received for 1\nI0218 17:20:27.173616 3996 log.go:172] (0xc000658630) (0xc0004455e0) Create stream\nI0218 17:20:27.173629 3996 log.go:172] (0xc000658630) (0xc0004455e0) Stream added, broadcasting: 3\nI0218 17:20:27.174665 3996 log.go:172] (0xc000658630) Reply frame received for 3\nI0218 17:20:27.174685 3996 log.go:172] (0xc000658630) (0xc000445680) Create stream\nI0218 17:20:27.174692 3996 log.go:172] (0xc000658630) (0xc000445680) Stream added, broadcasting: 5\nI0218 17:20:27.175884 3996 log.go:172] (0xc000658630) Reply frame received for 5\nI0218 17:20:27.248494 3996 log.go:172] (0xc000658630) Data frame received for 5\nI0218 17:20:27.248516 3996 log.go:172] (0xc000445680) (5) Data frame handling\nI0218 17:20:27.248531 3996 log.go:172] (0xc000445680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 17:20:27.288594 3996 log.go:172] (0xc000658630) Data frame received for 3\nI0218 17:20:27.288661 3996 log.go:172] (0xc0004455e0) (3) Data frame handling\nI0218 17:20:27.288691 3996 log.go:172] (0xc0004455e0) (3) Data frame sent\nI0218 17:20:27.381850 3996 log.go:172] (0xc000658630) Data frame received for 1\nI0218 17:20:27.381880 3996 log.go:172] (0xc000664960) (1) Data frame handling\nI0218 17:20:27.381890 3996 log.go:172] (0xc000664960) (1) Data frame sent\nI0218 17:20:27.381905 3996 log.go:172] (0xc000658630) (0xc000664960) Stream removed, broadcasting: 1\nI0218 17:20:27.383119 3996 log.go:172] (0xc000658630) (0xc0004455e0) Stream removed, broadcasting: 3\nI0218 17:20:27.383166 3996 log.go:172] (0xc000658630) (0xc000445680) Stream removed, broadcasting: 5\nI0218 17:20:27.383215 3996 log.go:172] (0xc000658630) (0xc000664960) Stream removed, broadcasting: 1\nI0218 17:20:27.383236 3996 log.go:172] (0xc000658630) (0xc0004455e0) Stream removed, broadcasting: 3\nI0218 17:20:27.383245 3996 log.go:172] (0xc000658630) (0xc000445680) Stream removed, broadcasting: 5\n" Feb 18 17:20:27.392: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 18 17:20:27.392: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 18 17:20:27.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7512 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 18 17:20:27.824: INFO: stderr: "I0218 17:20:27.621757 4016 log.go:172] (0xc0009d4f20) (0xc000a361e0) Create stream\nI0218 17:20:27.622171 4016 log.go:172] (0xc0009d4f20) (0xc000a361e0) Stream added, broadcasting: 1\nI0218 17:20:27.630461 4016 log.go:172] (0xc0009d4f20) Reply frame received for 1\nI0218 17:20:27.630531 4016 log.go:172] (0xc0009d4f20) (0xc000a36280) Create stream\nI0218 17:20:27.630541 4016 log.go:172] (0xc0009d4f20) (0xc000a36280) Stream added, broadcasting: 3\nI0218 17:20:27.632169 4016 log.go:172] (0xc0009d4f20) Reply frame received for 3\nI0218 17:20:27.632196 4016 log.go:172] (0xc0009d4f20) (0xc0009fc0a0) Create stream\nI0218 17:20:27.632206 4016 log.go:172] (0xc0009d4f20) (0xc0009fc0a0) Stream added, broadcasting: 5\nI0218 17:20:27.634250 4016 log.go:172] (0xc0009d4f20) Reply frame received for 
5\nI0218 17:20:27.707066 4016 log.go:172] (0xc0009d4f20) Data frame received for 5\nI0218 17:20:27.707120 4016 log.go:172] (0xc0009fc0a0) (5) Data frame handling\nI0218 17:20:27.707141 4016 log.go:172] (0xc0009fc0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 17:20:27.727609 4016 log.go:172] (0xc0009d4f20) Data frame received for 3\nI0218 17:20:27.727626 4016 log.go:172] (0xc000a36280) (3) Data frame handling\nI0218 17:20:27.727641 4016 log.go:172] (0xc000a36280) (3) Data frame sent\nI0218 17:20:27.807164 4016 log.go:172] (0xc0009d4f20) Data frame received for 1\nI0218 17:20:27.807236 4016 log.go:172] (0xc0009d4f20) (0xc000a36280) Stream removed, broadcasting: 3\nI0218 17:20:27.807305 4016 log.go:172] (0xc000a361e0) (1) Data frame handling\nI0218 17:20:27.807328 4016 log.go:172] (0xc000a361e0) (1) Data frame sent\nI0218 17:20:27.807341 4016 log.go:172] (0xc0009d4f20) (0xc000a361e0) Stream removed, broadcasting: 1\nI0218 17:20:27.812863 4016 log.go:172] (0xc0009d4f20) (0xc0009fc0a0) Stream removed, broadcasting: 5\nI0218 17:20:27.812894 4016 log.go:172] (0xc0009d4f20) Go away received\nI0218 17:20:27.812940 4016 log.go:172] (0xc0009d4f20) (0xc000a361e0) Stream removed, broadcasting: 1\nI0218 17:20:27.812954 4016 log.go:172] (0xc0009d4f20) (0xc000a36280) Stream removed, broadcasting: 3\nI0218 17:20:27.812961 4016 log.go:172] (0xc0009d4f20) (0xc0009fc0a0) Stream removed, broadcasting: 5\n" Feb 18 17:20:27.825: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 18 17:20:27.825: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 18 17:20:27.825: INFO: Waiting for statefulset status.replicas updated to 0 Feb 18 17:20:27.832: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 18 17:20:37.849: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 18 17:20:37.849: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 18 17:20:37.849: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 18 17:20:37.953: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999127s Feb 18 17:20:38.962: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.90548447s Feb 18 17:20:39.970: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.897227013s Feb 18 17:20:40.978: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.888600329s Feb 18 17:20:41.984: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.881150126s Feb 18 17:20:43.122: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.875198311s Feb 18 17:20:44.135: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.736510199s Feb 18 17:20:45.160: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.724010574s Feb 18 17:20:46.173: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.698510655s Feb 18 17:20:47.181: INFO: Verifying statefulset ss doesn't scale past 3 for another 686.309799ms STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-7512 Feb 18 17:20:48.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7512 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 17:20:48.619: INFO:
stderr: "I0218 17:20:48.430789 4036 log.go:172] (0xc000ce0e70) (0xc000cd8280) Create stream\nI0218 17:20:48.431040 4036 log.go:172] (0xc000ce0e70) (0xc000cd8280) Stream added, broadcasting: 1\nI0218 17:20:48.436107 4036 log.go:172] (0xc000ce0e70) Reply frame received for 1\nI0218 17:20:48.436186 4036 log.go:172] (0xc000ce0e70) (0xc000b5a280) Create stream\nI0218 17:20:48.436246 4036 log.go:172] (0xc000ce0e70) (0xc000b5a280) Stream added, broadcasting: 3\nI0218 17:20:48.438340 4036 log.go:172] (0xc000ce0e70) Reply frame received for 3\nI0218 17:20:48.438424 4036 log.go:172] (0xc000ce0e70) (0xc000715ea0) Create stream\nI0218 17:20:48.438437 4036 log.go:172] (0xc000ce0e70) (0xc000715ea0) Stream added, broadcasting: 5\nI0218 17:20:48.440130 4036 log.go:172] (0xc000ce0e70) Reply frame received for 5\nI0218 17:20:48.519526 4036 log.go:172] (0xc000ce0e70) Data frame received for 3\nI0218 17:20:48.519684 4036 log.go:172] (0xc000b5a280) (3) Data frame handling\nI0218 17:20:48.519705 4036 log.go:172] (0xc000b5a280) (3) Data frame sent\nI0218 17:20:48.519828 4036 log.go:172] (0xc000ce0e70) Data frame received for 5\nI0218 17:20:48.519852 4036 log.go:172] (0xc000715ea0) (5) Data frame handling\nI0218 17:20:48.519882 4036 log.go:172] (0xc000715ea0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 17:20:48.606583 4036 log.go:172] (0xc000ce0e70) Data frame received for 1\nI0218 17:20:48.606849 4036 log.go:172] (0xc000ce0e70) (0xc000715ea0) Stream removed, broadcasting: 5\nI0218 17:20:48.606905 4036 log.go:172] (0xc000cd8280) (1) Data frame handling\nI0218 17:20:48.606917 4036 log.go:172] (0xc000cd8280) (1) Data frame sent\nI0218 17:20:48.606942 4036 log.go:172] (0xc000ce0e70) (0xc000b5a280) Stream removed, broadcasting: 3\nI0218 17:20:48.606961 4036 log.go:172] (0xc000ce0e70) (0xc000cd8280) Stream removed, broadcasting: 1\nI0218 17:20:48.607528 4036 log.go:172] (0xc000ce0e70) (0xc000cd8280) Stream removed, broadcasting: 1\nI0218 17:20:48.607539 4036 log.go:172] (0xc000ce0e70) (0xc000b5a280) Stream removed, broadcasting: 3\nI0218 17:20:48.607546 4036 log.go:172] (0xc000ce0e70) (0xc000715ea0) Stream removed, broadcasting: 5\n" Feb 18 17:20:48.619: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 18 17:20:48.619: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 18 17:20:48.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7512 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 17:20:48.999: INFO: stderr: "I0218 17:20:48.802980 4056 log.go:172] (0xc0005f0420) (0xc000483f40) Create stream\nI0218 17:20:48.803134 4056 log.go:172] (0xc0005f0420) (0xc000483f40) Stream added, broadcasting: 1\nI0218 17:20:48.806722 4056 log.go:172] (0xc0005f0420) Reply frame received for 1\nI0218 17:20:48.806747 4056 log.go:172] (0xc0005f0420) (0xc000637a40) Create stream\nI0218 17:20:48.806753 4056 log.go:172] (0xc0005f0420) (0xc000637a40) Stream added, broadcasting: 3\nI0218 17:20:48.807700 4056 log.go:172] (0xc0005f0420) Reply frame received for 3\nI0218 17:20:48.807718 4056 log.go:172] (0xc0005f0420) (0xc0007680a0) Create stream\nI0218 17:20:48.807723 4056 log.go:172] (0xc0005f0420) (0xc0007680a0) Stream added, broadcasting: 5\nI0218 17:20:48.814582 4056 log.go:172] (0xc0005f0420) Reply frame received for 5\nI0218 17:20:48.901480 4056 log.go:172] (0xc0005f0420) Data frame received 
for 5\nI0218 17:20:48.901621 4056 log.go:172] (0xc0007680a0) (5) Data frame handling\nI0218 17:20:48.901640 4056 log.go:172] (0xc0007680a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 17:20:48.901666 4056 log.go:172] (0xc0005f0420) Data frame received for 3\nI0218 17:20:48.901672 4056 log.go:172] (0xc000637a40) (3) Data frame handling\nI0218 17:20:48.901687 4056 log.go:172] (0xc000637a40) (3) Data frame sent\nI0218 17:20:48.986324 4056 log.go:172] (0xc0005f0420) Data frame received for 1\nI0218 17:20:48.986365 4056 log.go:172] (0xc000483f40) (1) Data frame handling\nI0218 17:20:48.986385 4056 log.go:172] (0xc000483f40) (1) Data frame sent\nI0218 17:20:48.987732 4056 log.go:172] (0xc0005f0420) (0xc000483f40) Stream removed, broadcasting: 1\nI0218 17:20:48.989118 4056 log.go:172] (0xc0005f0420) (0xc000637a40) Stream removed, broadcasting: 3\nI0218 17:20:48.990730 4056 log.go:172] (0xc0005f0420) (0xc0007680a0) Stream removed, broadcasting: 5\nI0218 17:20:48.990764 4056 log.go:172] (0xc0005f0420) (0xc000483f40) Stream removed, broadcasting: 1\nI0218 17:20:48.990770 4056 log.go:172] (0xc0005f0420) (0xc000637a40) Stream removed, broadcasting: 3\nI0218 17:20:48.990774 4056 log.go:172] (0xc0005f0420) (0xc0007680a0) Stream removed, broadcasting: 5\n" Feb 18 17:20:48.999: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 18 17:20:48.999: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 18 17:20:48.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7512 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 18 17:20:49.267: INFO: stderr: "I0218 17:20:49.119884 4077 log.go:172] (0xc0000f5ad0) (0xc0005bdea0) Create stream\nI0218 17:20:49.120002 4077 log.go:172] (0xc0000f5ad0) (0xc0005bdea0) Stream added, broadcasting: 1\nI0218 17:20:49.122452 4077 log.go:172] (0xc0000f5ad0) Reply frame received for 1\nI0218 17:20:49.122478 4077 log.go:172] (0xc0000f5ad0) (0xc0005bdf40) Create stream\nI0218 17:20:49.122486 4077 log.go:172] (0xc0000f5ad0) (0xc0005bdf40) Stream added, broadcasting: 3\nI0218 17:20:49.123533 4077 log.go:172] (0xc0000f5ad0) Reply frame received for 3\nI0218 17:20:49.123561 4077 log.go:172] (0xc0000f5ad0) (0xc000770000) Create stream\nI0218 17:20:49.123572 4077 log.go:172] (0xc0000f5ad0) (0xc000770000) Stream added, broadcasting: 5\nI0218 17:20:49.125706 4077 log.go:172] (0xc0000f5ad0) Reply frame received for 5\nI0218 17:20:49.184749 4077 log.go:172] (0xc0000f5ad0) Data frame received for 3\nI0218 17:20:49.184781 4077 log.go:172] (0xc0005bdf40) (3) Data frame handling\nI0218 17:20:49.184794 4077 log.go:172] (0xc0005bdf40) (3) Data frame sent\nI0218 17:20:49.187258 4077 log.go:172] (0xc0000f5ad0) Data frame received for 5\nI0218 17:20:49.187281 4077 log.go:172] (0xc000770000) (5) Data frame handling\nI0218 17:20:49.187302 4077 log.go:172] (0xc000770000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 17:20:49.256232 4077 log.go:172] (0xc0000f5ad0) Data frame received for 1\nI0218 17:20:49.256311 4077 log.go:172] (0xc0005bdea0) (1) Data frame handling\nI0218 17:20:49.256332 4077 log.go:172] (0xc0005bdea0) (1) Data frame sent\nI0218 17:20:49.256435 4077 log.go:172] (0xc0000f5ad0) (0xc000770000) Stream removed, broadcasting: 5\nI0218 17:20:49.256534 4077 log.go:172] (0xc0000f5ad0) (0xc0005bdea0) Stream removed, 
broadcasting: 1\nI0218 17:20:49.256671 4077 log.go:172] (0xc0000f5ad0) (0xc0005bdf40) Stream removed, broadcasting: 3\nI0218 17:20:49.257092 4077 log.go:172] (0xc0000f5ad0) (0xc0005bdea0) Stream removed, broadcasting: 1\nI0218 17:20:49.257151 4077 log.go:172] (0xc0000f5ad0) (0xc0005bdf40) Stream removed, broadcasting: 3\nI0218 17:20:49.257181 4077 log.go:172] (0xc0000f5ad0) (0xc000770000) Stream removed, broadcasting: 5\n" Feb 18 17:20:49.267: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 18 17:20:49.267: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 18 17:20:49.267: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Feb 18 17:21:09.289: INFO: Deleting all statefulset in ns statefulset-7512 Feb 18 17:21:09.293: INFO: Scaling statefulset ss to 0 Feb 18 17:21:09.310: INFO: Waiting for statefulset status.replicas updated to 0 Feb 18 17:21:09.317: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 18 17:21:09.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7512" for this suite. • [SLOW TEST:93.953 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":280,"completed":197,"skipped":3175,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 18 17:21:09.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 18 17:21:09.665: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/:
alternatives.log alternatives.l... (200; 109.470907ms)
Feb 18 17:21:09.671: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.049847ms)
Feb 18 17:21:09.677: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.097719ms)
Feb 18 17:21:09.684: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.793297ms)
Feb 18 17:21:09.692: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.251214ms)
Feb 18 17:21:09.702: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 9.371813ms)
Feb 18 17:21:09.709: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.152015ms)
Feb 18 17:21:09.718: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.977321ms)
Feb 18 17:21:09.724: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.545201ms)
Feb 18 17:21:09.730: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.371107ms)
Feb 18 17:21:09.736: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.320554ms)
Feb 18 17:21:09.777: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 40.906558ms)
Feb 18 17:21:09.792: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 14.670503ms)
Feb 18 17:21:09.801: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.43833ms)
Feb 18 17:21:09.811: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 10.457481ms)
Feb 18 17:21:09.818: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.443983ms)
Feb 18 17:21:09.825: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.482947ms)
Feb 18 17:21:09.830: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.532796ms)
Feb 18 17:21:09.836: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.222244ms)
Feb 18 17:21:09.843: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.117424ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:21:09.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5259" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":280,"completed":198,"skipped":3175,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:21:09.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 18 17:21:18.512: INFO: Successfully updated pod "annotationupdate36f64db3-08a3-4a4e-abb8-d0eea2e95d97"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:21:20.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7527" for this suite.

• [SLOW TEST:10.749 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":199,"skipped":3209,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:21:20.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 18 17:21:20.828: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3943 /api/v1/namespaces/watch-3943/configmaps/e2e-watch-test-resource-version 53505c3b-0896-4ee9-a821-2ee181d0aa2c 9222965 0 2020-02-18 17:21:20 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 18 17:21:20.828: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3943 /api/v1/namespaces/watch-3943/configmaps/e2e-watch-test-resource-version 53505c3b-0896-4ee9-a821-2ee181d0aa2c 9222966 0 2020-02-18 17:21:20 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:21:20.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3943" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":280,"completed":200,"skipped":3257,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:21:20.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7640.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7640.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7640.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7640.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7640.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7640.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7640.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7640.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7640.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7640.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 17:21:35.077: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:35.081: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:35.084: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:35.086: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:35.114: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:35.175: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:35.184: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:35.193: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:35.203: INFO: Lookups using dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7640.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7640.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local jessie_udp@dns-test-service-2.dns-7640.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7640.svc.cluster.local]

Feb 18 17:21:40.212: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:40.215: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:40.219: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:40.221: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:40.231: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:40.233: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:40.237: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:40.239: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:40.247: INFO: Lookups using dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7640.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7640.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local jessie_udp@dns-test-service-2.dns-7640.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7640.svc.cluster.local]

Feb 18 17:21:45.222: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:45.230: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:45.234: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:45.237: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:45.248: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:45.251: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:45.255: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:45.259: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:45.278: INFO: Lookups using dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7640.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7640.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local jessie_udp@dns-test-service-2.dns-7640.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7640.svc.cluster.local]

Feb 18 17:21:50.237: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:50.242: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:50.247: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:50.253: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:50.266: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:50.270: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:50.275: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:50.299: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:50.311: INFO: Lookups using dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7640.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7640.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local jessie_udp@dns-test-service-2.dns-7640.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7640.svc.cluster.local]

Feb 18 17:21:55.228: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:55.233: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:55.237: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:55.241: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:55.255: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:55.263: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:55.266: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:55.272: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:21:55.283: INFO: Lookups using dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7640.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7640.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local jessie_udp@dns-test-service-2.dns-7640.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7640.svc.cluster.local]

Feb 18 17:22:00.212: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:22:00.216: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:22:00.221: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:22:00.225: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:22:00.240: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:22:00.246: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:22:00.251: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:22:00.259: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7640.svc.cluster.local from pod dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e: the server could not find the requested resource (get pods dns-test-48f5e95a-ae1d-49ec-b558-28242317489e)
Feb 18 17:22:00.271: INFO: Lookups using dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7640.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7640.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7640.svc.cluster.local jessie_udp@dns-test-service-2.dns-7640.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7640.svc.cluster.local]

Feb 18 17:22:05.282: INFO: DNS probes using dns-7640/dns-test-48f5e95a-ae1d-49ec-b558-28242317489e succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:22:05.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7640" for this suite.

• [SLOW TEST:44.844 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":280,"completed":201,"skipped":3266,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:22:05.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 18 17:22:05.959: INFO: Waiting up to 5m0s for pod "pod-6f58561c-221d-43fb-ab9f-774f21a1d454" in namespace "emptydir-1821" to be "success or failure"
Feb 18 17:22:06.097: INFO: Pod "pod-6f58561c-221d-43fb-ab9f-774f21a1d454": Phase="Pending", Reason="", readiness=false. Elapsed: 138.255423ms
Feb 18 17:22:08.106: INFO: Pod "pod-6f58561c-221d-43fb-ab9f-774f21a1d454": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147396121s
Feb 18 17:22:10.113: INFO: Pod "pod-6f58561c-221d-43fb-ab9f-774f21a1d454": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154105382s
Feb 18 17:22:12.119: INFO: Pod "pod-6f58561c-221d-43fb-ab9f-774f21a1d454": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160275269s
Feb 18 17:22:14.128: INFO: Pod "pod-6f58561c-221d-43fb-ab9f-774f21a1d454": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168671378s
Feb 18 17:22:16.138: INFO: Pod "pod-6f58561c-221d-43fb-ab9f-774f21a1d454": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.178622654s
STEP: Saw pod success
Feb 18 17:22:16.138: INFO: Pod "pod-6f58561c-221d-43fb-ab9f-774f21a1d454" satisfied condition "success or failure"
Feb 18 17:22:16.142: INFO: Trying to get logs from node jerma-node pod pod-6f58561c-221d-43fb-ab9f-774f21a1d454 container test-container: 
STEP: delete the pod
Feb 18 17:22:16.189: INFO: Waiting for pod pod-6f58561c-221d-43fb-ab9f-774f21a1d454 to disappear
Feb 18 17:22:16.210: INFO: Pod pod-6f58561c-221d-43fb-ab9f-774f21a1d454 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:22:16.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1821" for this suite.

• [SLOW TEST:10.535 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":202,"skipped":3269,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:22:16.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Feb 18 17:22:29.056: INFO: Successfully updated pod "adopt-release-sshzq"
STEP: Checking that the Job readopts the Pod
Feb 18 17:22:29.057: INFO: Waiting up to 15m0s for pod "adopt-release-sshzq" in namespace "job-3303" to be "adopted"
Feb 18 17:22:29.141: INFO: Pod "adopt-release-sshzq": Phase="Running", Reason="", readiness=true. Elapsed: 84.670079ms
Feb 18 17:22:31.147: INFO: Pod "adopt-release-sshzq": Phase="Running", Reason="", readiness=true. Elapsed: 2.090131949s
Feb 18 17:22:31.147: INFO: Pod "adopt-release-sshzq" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Feb 18 17:22:31.667: INFO: Successfully updated pod "adopt-release-sshzq"
STEP: Checking that the Job releases the Pod
Feb 18 17:22:31.667: INFO: Waiting up to 15m0s for pod "adopt-release-sshzq" in namespace "job-3303" to be "released"
Feb 18 17:22:31.682: INFO: Pod "adopt-release-sshzq": Phase="Running", Reason="", readiness=true. Elapsed: 15.114664ms
Feb 18 17:22:33.708: INFO: Pod "adopt-release-sshzq": Phase="Running", Reason="", readiness=true. Elapsed: 2.041064623s
Feb 18 17:22:33.709: INFO: Pod "adopt-release-sshzq" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:22:33.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3303" for this suite.

• [SLOW TEST:17.532 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":280,"completed":203,"skipped":3273,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:22:33.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:22:50.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4007" for this suite.

• [SLOW TEST:16.449 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":280,"completed":204,"skipped":3273,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:22:50.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 17:22:51.021: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 17:22:53.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643371, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643371, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643371, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643371, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 17:22:55.041: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643371, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643371, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643371, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643371, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 17:22:57.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643371, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643371, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643371, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643371, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 17:23:00.116: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 18 17:23:00.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9608-crds.webhook.example.com via the AdmissionRegistration API
Feb 18 17:23:00.448: INFO: Waiting for webhook configuration to be ready...
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:23:01.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8477" for this suite.
STEP: Destroying namespace "webhook-8477-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:11.290 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":280,"completed":205,"skipped":3291,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:23:01.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 18 17:23:01.550: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:23:24.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5190" for this suite.

• [SLOW TEST:22.877 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":280,"completed":206,"skipped":3304,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:23:24.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 18 17:23:24.541: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5eeb3c9f-7d50-49b9-b720-2f828912ba00" in namespace "projected-7993" to be "success or failure"
Feb 18 17:23:24.579: INFO: Pod "downwardapi-volume-5eeb3c9f-7d50-49b9-b720-2f828912ba00": Phase="Pending", Reason="", readiness=false. Elapsed: 38.445269ms
Feb 18 17:23:26.589: INFO: Pod "downwardapi-volume-5eeb3c9f-7d50-49b9-b720-2f828912ba00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047862006s
Feb 18 17:23:28.601: INFO: Pod "downwardapi-volume-5eeb3c9f-7d50-49b9-b720-2f828912ba00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06040182s
Feb 18 17:23:30.622: INFO: Pod "downwardapi-volume-5eeb3c9f-7d50-49b9-b720-2f828912ba00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080948804s
Feb 18 17:23:32.628: INFO: Pod "downwardapi-volume-5eeb3c9f-7d50-49b9-b720-2f828912ba00": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08679048s
Feb 18 17:23:34.635: INFO: Pod "downwardapi-volume-5eeb3c9f-7d50-49b9-b720-2f828912ba00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.094379382s
STEP: Saw pod success
Feb 18 17:23:34.636: INFO: Pod "downwardapi-volume-5eeb3c9f-7d50-49b9-b720-2f828912ba00" satisfied condition "success or failure"
Feb 18 17:23:34.638: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5eeb3c9f-7d50-49b9-b720-2f828912ba00 container client-container: 
STEP: delete the pod
Feb 18 17:23:34.680: INFO: Waiting for pod downwardapi-volume-5eeb3c9f-7d50-49b9-b720-2f828912ba00 to disappear
Feb 18 17:23:34.683: INFO: Pod downwardapi-volume-5eeb3c9f-7d50-49b9-b720-2f828912ba00 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:23:34.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7993" for this suite.

• [SLOW TEST:10.317 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":207,"skipped":3313,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:23:34.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 18 17:23:35.052: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 18 17:23:35.125: INFO: Waiting for terminating namespaces to be deleted...
Feb 18 17:23:35.131: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 18 17:23:35.163: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 18 17:23:35.164: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 18 17:23:35.164: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 18 17:23:35.164: INFO: 	Container weave ready: true, restart count 1
Feb 18 17:23:35.164: INFO: 	Container weave-npc ready: true, restart count 0
Feb 18 17:23:35.164: INFO: pod-init-68377cab-bebf-445d-ab4a-4499f61ed969 from init-container-5190 started at 2020-02-18 17:23:01 +0000 UTC (1 container statuses recorded)
Feb 18 17:23:35.164: INFO: 	Container run1 ready: false, restart count 0
Feb 18 17:23:35.164: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 18 17:23:35.240: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 18 17:23:35.240: INFO: 	Container coredns ready: true, restart count 0
Feb 18 17:23:35.240: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 18 17:23:35.240: INFO: 	Container coredns ready: true, restart count 0
Feb 18 17:23:35.240: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 18 17:23:35.240: INFO: 	Container kube-controller-manager ready: true, restart count 12
Feb 18 17:23:35.240: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 18 17:23:35.240: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 18 17:23:35.241: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 18 17:23:35.241: INFO: 	Container weave ready: true, restart count 0
Feb 18 17:23:35.241: INFO: 	Container weave-npc ready: true, restart count 0
Feb 18 17:23:35.241: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 18 17:23:35.241: INFO: 	Container kube-scheduler ready: true, restart count 16
Feb 18 17:23:35.241: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 18 17:23:35.241: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 18 17:23:35.241: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 18 17:23:35.241: INFO: 	Container etcd ready: true, restart count 1
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-aa154d8f-03c2-4cb8-b975-82fa693694f0 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-aa154d8f-03c2-4cb8-b975-82fa693694f0 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-aa154d8f-03c2-4cb8-b975-82fa693694f0
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:24:09.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2355" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:35.063 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":280,"completed":208,"skipped":3369,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:24:09.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-60cb7f78-f4f6-4a82-9182-54fe40718b96
STEP: Creating configMap with name cm-test-opt-upd-74fa7692-7d4e-4526-8d86-ed36279f8489
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-60cb7f78-f4f6-4a82-9182-54fe40718b96
STEP: Updating configmap cm-test-opt-upd-74fa7692-7d4e-4526-8d86-ed36279f8489
STEP: Creating configMap with name cm-test-opt-create-be7684eb-867c-47b1-a9d2-17e4af3aaf6b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:24:26.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4625" for this suite.

• [SLOW TEST:16.913 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":209,"skipped":3378,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:24:26.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-4b5b664f-6829-4d05-beaa-2947cc479c45
STEP: Creating a pod to test consume configMaps
Feb 18 17:24:26.856: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9b843909-de5a-4160-9ddd-0ca242cb8b87" in namespace "projected-5263" to be "success or failure"
Feb 18 17:24:26.878: INFO: Pod "pod-projected-configmaps-9b843909-de5a-4160-9ddd-0ca242cb8b87": Phase="Pending", Reason="", readiness=false. Elapsed: 22.499696ms
Feb 18 17:24:28.885: INFO: Pod "pod-projected-configmaps-9b843909-de5a-4160-9ddd-0ca242cb8b87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029247272s
Feb 18 17:24:31.936: INFO: Pod "pod-projected-configmaps-9b843909-de5a-4160-9ddd-0ca242cb8b87": Phase="Pending", Reason="", readiness=false. Elapsed: 5.079658486s
Feb 18 17:24:33.970: INFO: Pod "pod-projected-configmaps-9b843909-de5a-4160-9ddd-0ca242cb8b87": Phase="Pending", Reason="", readiness=false. Elapsed: 7.114082943s
Feb 18 17:24:35.984: INFO: Pod "pod-projected-configmaps-9b843909-de5a-4160-9ddd-0ca242cb8b87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.127986525s
STEP: Saw pod success
Feb 18 17:24:35.984: INFO: Pod "pod-projected-configmaps-9b843909-de5a-4160-9ddd-0ca242cb8b87" satisfied condition "success or failure"
Feb 18 17:24:35.989: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-projected-configmaps-9b843909-de5a-4160-9ddd-0ca242cb8b87 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 18 17:24:36.031: INFO: Waiting for pod pod-projected-configmaps-9b843909-de5a-4160-9ddd-0ca242cb8b87 to disappear
Feb 18 17:24:36.117: INFO: Pod pod-projected-configmaps-9b843909-de5a-4160-9ddd-0ca242cb8b87 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:24:36.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5263" for this suite.

• [SLOW TEST:9.454 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":210,"skipped":3388,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:24:36.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 18 17:24:36.323: INFO: Waiting up to 5m0s for pod "pod-89f70c9e-d22e-4ceb-82aa-5326b59168ca" in namespace "emptydir-8267" to be "success or failure"
Feb 18 17:24:36.331: INFO: Pod "pod-89f70c9e-d22e-4ceb-82aa-5326b59168ca": Phase="Pending", Reason="", readiness=false. Elapsed: 7.55497ms
Feb 18 17:24:38.338: INFO: Pod "pod-89f70c9e-d22e-4ceb-82aa-5326b59168ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015397413s
Feb 18 17:24:40.345: INFO: Pod "pod-89f70c9e-d22e-4ceb-82aa-5326b59168ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022255797s
Feb 18 17:24:42.777: INFO: Pod "pod-89f70c9e-d22e-4ceb-82aa-5326b59168ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.453904407s
Feb 18 17:24:44.786: INFO: Pod "pod-89f70c9e-d22e-4ceb-82aa-5326b59168ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.462541243s
Feb 18 17:24:46.793: INFO: Pod "pod-89f70c9e-d22e-4ceb-82aa-5326b59168ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.470252678s
STEP: Saw pod success
Feb 18 17:24:46.793: INFO: Pod "pod-89f70c9e-d22e-4ceb-82aa-5326b59168ca" satisfied condition "success or failure"
Feb 18 17:24:46.797: INFO: Trying to get logs from node jerma-node pod pod-89f70c9e-d22e-4ceb-82aa-5326b59168ca container test-container: 
STEP: delete the pod
Feb 18 17:24:46.935: INFO: Waiting for pod pod-89f70c9e-d22e-4ceb-82aa-5326b59168ca to disappear
Feb 18 17:24:46.950: INFO: Pod pod-89f70c9e-d22e-4ceb-82aa-5326b59168ca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:24:46.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8267" for this suite.

• [SLOW TEST:10.835 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":211,"skipped":3403,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:24:46.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 18 17:24:47.120: INFO: Waiting up to 5m0s for pod "pod-55eb864e-060b-42c6-8fa1-476ea890b5bb" in namespace "emptydir-8179" to be "success or failure"
Feb 18 17:24:47.174: INFO: Pod "pod-55eb864e-060b-42c6-8fa1-476ea890b5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 54.048321ms
Feb 18 17:24:49.179: INFO: Pod "pod-55eb864e-060b-42c6-8fa1-476ea890b5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059614556s
Feb 18 17:24:51.186: INFO: Pod "pod-55eb864e-060b-42c6-8fa1-476ea890b5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066258581s
Feb 18 17:24:53.632: INFO: Pod "pod-55eb864e-060b-42c6-8fa1-476ea890b5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.512263891s
Feb 18 17:24:55.722: INFO: Pod "pod-55eb864e-060b-42c6-8fa1-476ea890b5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.601902491s
Feb 18 17:24:57.728: INFO: Pod "pod-55eb864e-060b-42c6-8fa1-476ea890b5bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.607812759s
STEP: Saw pod success
Feb 18 17:24:57.728: INFO: Pod "pod-55eb864e-060b-42c6-8fa1-476ea890b5bb" satisfied condition "success or failure"
Feb 18 17:24:57.732: INFO: Trying to get logs from node jerma-node pod pod-55eb864e-060b-42c6-8fa1-476ea890b5bb container test-container: 
STEP: delete the pod
Feb 18 17:24:57.829: INFO: Waiting for pod pod-55eb864e-060b-42c6-8fa1-476ea890b5bb to disappear
Feb 18 17:24:57.837: INFO: Pod pod-55eb864e-060b-42c6-8fa1-476ea890b5bb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:24:57.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8179" for this suite.

• [SLOW TEST:10.933 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":212,"skipped":3478,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:24:57.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service endpoint-test2 in namespace services-7883
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7883 to expose endpoints map[]
Feb 18 17:24:58.020: INFO: Get endpoints failed (5.468436ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 18 17:24:59.026: INFO: successfully validated that service endpoint-test2 in namespace services-7883 exposes endpoints map[] (1.011593947s elapsed)
STEP: Creating pod pod1 in namespace services-7883
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7883 to expose endpoints map[pod1:[80]]
Feb 18 17:25:03.137: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.100408064s elapsed, will retry)
Feb 18 17:25:07.177: INFO: successfully validated that service endpoint-test2 in namespace services-7883 exposes endpoints map[pod1:[80]] (8.140405056s elapsed)
STEP: Creating pod pod2 in namespace services-7883
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7883 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 18 17:25:11.648: INFO: Unexpected endpoints: found map[4ca42b9e-3720-4c01-9eee-e30d23fbab28:[80]], expected map[pod1:[80] pod2:[80]] (4.46728906s elapsed, will retry)
Feb 18 17:25:13.714: INFO: successfully validated that service endpoint-test2 in namespace services-7883 exposes endpoints map[pod1:[80] pod2:[80]] (6.532499239s elapsed)
STEP: Deleting pod pod1 in namespace services-7883
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7883 to expose endpoints map[pod2:[80]]
Feb 18 17:25:16.391: INFO: successfully validated that service endpoint-test2 in namespace services-7883 exposes endpoints map[pod2:[80]] (2.668438624s elapsed)
STEP: Deleting pod pod2 in namespace services-7883
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7883 to expose endpoints map[]
Feb 18 17:25:16.469: INFO: successfully validated that service endpoint-test2 in namespace services-7883 exposes endpoints map[] (68.244034ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:25:16.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7883" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:18.645 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":280,"completed":213,"skipped":3478,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:25:16.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-1610271f-e789-4dd4-a19f-b3bd79ef845f
STEP: Creating a pod to test consume configMaps
Feb 18 17:25:16.769: INFO: Waiting up to 5m0s for pod "pod-configmaps-d3c47f1e-5c40-4ef5-9cad-cd5d3ea534c8" in namespace "configmap-7528" to be "success or failure"
Feb 18 17:25:16.782: INFO: Pod "pod-configmaps-d3c47f1e-5c40-4ef5-9cad-cd5d3ea534c8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.097637ms
Feb 18 17:25:19.989: INFO: Pod "pod-configmaps-d3c47f1e-5c40-4ef5-9cad-cd5d3ea534c8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.219696117s
Feb 18 17:25:22.131: INFO: Pod "pod-configmaps-d3c47f1e-5c40-4ef5-9cad-cd5d3ea534c8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.361948417s
Feb 18 17:25:24.380: INFO: Pod "pod-configmaps-d3c47f1e-5c40-4ef5-9cad-cd5d3ea534c8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.61088162s
Feb 18 17:25:26.386: INFO: Pod "pod-configmaps-d3c47f1e-5c40-4ef5-9cad-cd5d3ea534c8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.616360165s
Feb 18 17:25:28.392: INFO: Pod "pod-configmaps-d3c47f1e-5c40-4ef5-9cad-cd5d3ea534c8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.6230468s
Feb 18 17:25:30.399: INFO: Pod "pod-configmaps-d3c47f1e-5c40-4ef5-9cad-cd5d3ea534c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.62972415s
STEP: Saw pod success
Feb 18 17:25:30.399: INFO: Pod "pod-configmaps-d3c47f1e-5c40-4ef5-9cad-cd5d3ea534c8" satisfied condition "success or failure"
Feb 18 17:25:30.403: INFO: Trying to get logs from node jerma-node pod pod-configmaps-d3c47f1e-5c40-4ef5-9cad-cd5d3ea534c8 container configmap-volume-test: 
STEP: delete the pod
Feb 18 17:25:30.526: INFO: Waiting for pod pod-configmaps-d3c47f1e-5c40-4ef5-9cad-cd5d3ea534c8 to disappear
Feb 18 17:25:30.539: INFO: Pod pod-configmaps-d3c47f1e-5c40-4ef5-9cad-cd5d3ea534c8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:25:30.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7528" for this suite.

• [SLOW TEST:14.004 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":214,"skipped":3521,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:25:30.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 18 17:25:30.665: INFO: Waiting up to 5m0s for pod "pod-772e233e-68b2-4dbb-bfd2-26802b7117b7" in namespace "emptydir-5886" to be "success or failure"
Feb 18 17:25:30.681: INFO: Pod "pod-772e233e-68b2-4dbb-bfd2-26802b7117b7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.1968ms
Feb 18 17:25:32.688: INFO: Pod "pod-772e233e-68b2-4dbb-bfd2-26802b7117b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022960392s
Feb 18 17:25:34.695: INFO: Pod "pod-772e233e-68b2-4dbb-bfd2-26802b7117b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029522595s
Feb 18 17:25:36.701: INFO: Pod "pod-772e233e-68b2-4dbb-bfd2-26802b7117b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036295383s
Feb 18 17:25:38.713: INFO: Pod "pod-772e233e-68b2-4dbb-bfd2-26802b7117b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048238526s
STEP: Saw pod success
Feb 18 17:25:38.714: INFO: Pod "pod-772e233e-68b2-4dbb-bfd2-26802b7117b7" satisfied condition "success or failure"
Feb 18 17:25:38.719: INFO: Trying to get logs from node jerma-node pod pod-772e233e-68b2-4dbb-bfd2-26802b7117b7 container test-container: 
STEP: delete the pod
Feb 18 17:25:38.777: INFO: Waiting for pod pod-772e233e-68b2-4dbb-bfd2-26802b7117b7 to disappear
Feb 18 17:25:38.783: INFO: Pod pod-772e233e-68b2-4dbb-bfd2-26802b7117b7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:25:38.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5886" for this suite.

• [SLOW TEST:8.241 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":215,"skipped":3552,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:25:38.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-38ecf93e-2a68-4f14-b143-fd570c534a89
STEP: Creating configMap with name cm-test-opt-upd-89d89001-b169-456c-ac79-539c9cdc4adb
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-38ecf93e-2a68-4f14-b143-fd570c534a89
STEP: Updating configmap cm-test-opt-upd-89d89001-b169-456c-ac79-539c9cdc4adb
STEP: Creating configMap with name cm-test-opt-create-44776feb-d73e-42fa-9181-877481db4536
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:26:58.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3197" for this suite.

• [SLOW TEST:79.415 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":216,"skipped":3555,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:26:58.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 18 17:27:08.964: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:27:09.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5357" for this suite.

• [SLOW TEST:11.135 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":217,"skipped":3561,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:27:09.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:27:16.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1901" for this suite.

• [SLOW TEST:7.191 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":280,"completed":218,"skipped":3606,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:27:16.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 17:27:17.344: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 17:27:19.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643637, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643637, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643637, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643637, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 17:27:21.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643637, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643637, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643637, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643637, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 17:27:23.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643637, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643637, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643637, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643637, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 17:27:26.413: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:27:27.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4001" for this suite.
STEP: Destroying namespace "webhook-4001-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.739 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":280,"completed":219,"skipped":3624,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:27:27.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1694
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 18 17:27:27.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2705'
Feb 18 17:27:30.712: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 18 17:27:30.712: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Feb 18 17:27:30.809: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 18 17:27:30.818: INFO: scanned /root for discovery docs: 
Feb 18 17:27:30.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2705'
Feb 18 17:27:51.924: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 18 17:27:51.924: INFO: stdout: "Created e2e-test-httpd-rc-55fcb1a0cc4bf2cdab68ee0f38e7f013\nScaling up e2e-test-httpd-rc-55fcb1a0cc4bf2cdab68ee0f38e7f013 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-55fcb1a0cc4bf2cdab68ee0f38e7f013 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-55fcb1a0cc4bf2cdab68ee0f38e7f013 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Feb 18 17:27:51.924: INFO: stdout: "Created e2e-test-httpd-rc-55fcb1a0cc4bf2cdab68ee0f38e7f013\nScaling up e2e-test-httpd-rc-55fcb1a0cc4bf2cdab68ee0f38e7f013 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-55fcb1a0cc4bf2cdab68ee0f38e7f013 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-55fcb1a0cc4bf2cdab68ee0f38e7f013 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Feb 18 17:27:51.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2705'
Feb 18 17:27:52.094: INFO: stderr: ""
Feb 18 17:27:52.094: INFO: stdout: "e2e-test-httpd-rc-55fcb1a0cc4bf2cdab68ee0f38e7f013-kz4tq e2e-test-httpd-rc-j4rn4 "
STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2
Feb 18 17:27:57.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2705'
Feb 18 17:27:57.218: INFO: stderr: ""
Feb 18 17:27:57.219: INFO: stdout: "e2e-test-httpd-rc-55fcb1a0cc4bf2cdab68ee0f38e7f013-kz4tq "
Feb 18 17:27:57.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-55fcb1a0cc4bf2cdab68ee0f38e7f013-kz4tq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2705'
Feb 18 17:27:57.352: INFO: stderr: ""
Feb 18 17:27:57.352: INFO: stdout: "true"
Feb 18 17:27:57.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-55fcb1a0cc4bf2cdab68ee0f38e7f013-kz4tq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2705'
Feb 18 17:27:57.505: INFO: stderr: ""
Feb 18 17:27:57.505: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Feb 18 17:27:57.505: INFO: e2e-test-httpd-rc-55fcb1a0cc4bf2cdab68ee0f38e7f013-kz4tq is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700
Feb 18 17:27:57.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2705'
Feb 18 17:27:57.661: INFO: stderr: ""
Feb 18 17:27:57.661: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:27:57.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2705" for this suite.

• [SLOW TEST:30.391 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1689
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":280,"completed":220,"skipped":3664,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:27:57.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 17:27:58.105: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 17:28:00.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 17:28:02.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 17:28:04.130: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 17:28:06.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643678, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 17:28:09.184: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:28:09.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4393" for this suite.
STEP: Destroying namespace "webhook-4393-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:11.847 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":280,"completed":221,"skipped":3674,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:28:09.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:28:17.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6242" for this suite.

• [SLOW TEST:8.348 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":280,"completed":222,"skipped":3684,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:28:17.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-072e0d6c-3de1-43cb-ad90-8c76b5218bd3
STEP: Creating a pod to test consume secrets
Feb 18 17:28:17.983: INFO: Waiting up to 5m0s for pod "pod-secrets-c7e96f39-9a6f-4007-a511-85adf6b9dda3" in namespace "secrets-6084" to be "success or failure"
Feb 18 17:28:18.001: INFO: Pod "pod-secrets-c7e96f39-9a6f-4007-a511-85adf6b9dda3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.396135ms
Feb 18 17:28:20.008: INFO: Pod "pod-secrets-c7e96f39-9a6f-4007-a511-85adf6b9dda3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024032981s
Feb 18 17:28:22.018: INFO: Pod "pod-secrets-c7e96f39-9a6f-4007-a511-85adf6b9dda3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034407889s
Feb 18 17:28:24.023: INFO: Pod "pod-secrets-c7e96f39-9a6f-4007-a511-85adf6b9dda3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039817643s
Feb 18 17:28:26.030: INFO: Pod "pod-secrets-c7e96f39-9a6f-4007-a511-85adf6b9dda3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046508586s
STEP: Saw pod success
Feb 18 17:28:26.030: INFO: Pod "pod-secrets-c7e96f39-9a6f-4007-a511-85adf6b9dda3" satisfied condition "success or failure"
Feb 18 17:28:26.034: INFO: Trying to get logs from node jerma-node pod pod-secrets-c7e96f39-9a6f-4007-a511-85adf6b9dda3 container secret-volume-test: 
STEP: delete the pod
Feb 18 17:28:26.079: INFO: Waiting for pod pod-secrets-c7e96f39-9a6f-4007-a511-85adf6b9dda3 to disappear
Feb 18 17:28:26.082: INFO: Pod pod-secrets-c7e96f39-9a6f-4007-a511-85adf6b9dda3 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:28:26.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6084" for this suite.

• [SLOW TEST:8.224 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":223,"skipped":3693,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:28:26.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0218 17:29:09.450962       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 17:29:09.451: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:29:09.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8548" for this suite.

• [SLOW TEST:43.429 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":280,"completed":224,"skipped":3707,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:29:09.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 17:29:11.400: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 18 17:29:13.419: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643751, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643751, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643751, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643751, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 17:29:15.463 to 17:29:31.738: INFO: deployment status unchanged across nine further polls: Available=False (MinimumReplicasUnavailable, "Deployment does not have minimum availability."), Progressing=True (ReplicaSetUpdated, ReplicaSet "sample-webhook-deployment-5f65f8c764" is progressing)
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 17:29:34.480: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:29:34.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5115" for this suite.
STEP: Destroying namespace "webhook-5115-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:25.218 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":280,"completed":225,"skipped":3749,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:29:34.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
Feb 18 17:29:35.436: INFO: created pod pod-service-account-defaultsa
Feb 18 17:29:35.436: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 18 17:29:35.449: INFO: created pod pod-service-account-mountsa
Feb 18 17:29:35.449: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 18 17:29:35.601: INFO: created pod pod-service-account-nomountsa
Feb 18 17:29:35.601: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 18 17:29:35.620: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 18 17:29:35.620: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 18 17:29:35.645: INFO: created pod pod-service-account-mountsa-mountspec
Feb 18 17:29:35.645: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 18 17:29:35.667: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 18 17:29:35.668: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 18 17:29:35.879: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 18 17:29:35.880: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 18 17:29:35.908: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 18 17:29:35.908: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 18 17:29:35.926: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 18 17:29:35.926: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:29:35.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4265" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":280,"completed":226,"skipped":3764,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:29:37.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 18 17:29:42.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643781, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643781, loc:(*time.Location)(0x7e52ca0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643782, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643782, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Feb 18 17:29:45.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643782, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643782, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643782, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717643781, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 17:29:48.069 to 17:30:10.764: INFO: deployment status unchanged across thirteen further polls: Available=False (MinimumReplicasUnavailable, "Deployment does not have minimum availability."), Progressing=True (ReplicaSetUpdated, ReplicaSet "sample-webhook-deployment-5f65f8c764" is progressing)
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 18 17:30:13.795: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:30:24.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8896" for this suite.
STEP: Destroying namespace "webhook-8896-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:46.364 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":280,"completed":227,"skipped":3786,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:30:24.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Starting the proxy
Feb 18 17:30:24.372: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix193343263/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:30:24.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9048" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":280,"completed":228,"skipped":3813,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:30:24.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 18 17:30:24.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:30:32.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5122" for this suite.

• [SLOW TEST:8.232 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":280,"completed":229,"skipped":3841,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:30:32.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-88595aa2-70e9-406d-88d3-a77c93b35b7e
STEP: Creating a pod to test consume configMaps
Feb 18 17:30:32.825: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f81808d1-86d7-4d2a-841a-bb7f177c2503" in namespace "projected-7972" to be "success or failure"
Feb 18 17:30:32.833: INFO: Pod "pod-projected-configmaps-f81808d1-86d7-4d2a-841a-bb7f177c2503": Phase="Pending", Reason="", readiness=false. Elapsed: 7.995135ms
Feb 18 17:30:34.841: INFO: Pod "pod-projected-configmaps-f81808d1-86d7-4d2a-841a-bb7f177c2503": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015984942s
Feb 18 17:30:36.849: INFO: Pod "pod-projected-configmaps-f81808d1-86d7-4d2a-841a-bb7f177c2503": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02361956s
Feb 18 17:30:38.860: INFO: Pod "pod-projected-configmaps-f81808d1-86d7-4d2a-841a-bb7f177c2503": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035210191s
Feb 18 17:30:40.870: INFO: Pod "pod-projected-configmaps-f81808d1-86d7-4d2a-841a-bb7f177c2503": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045395296s
STEP: Saw pod success
Feb 18 17:30:40.871: INFO: Pod "pod-projected-configmaps-f81808d1-86d7-4d2a-841a-bb7f177c2503" satisfied condition "success or failure"
Feb 18 17:30:40.876: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-f81808d1-86d7-4d2a-841a-bb7f177c2503 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 18 17:30:41.044: INFO: Waiting for pod pod-projected-configmaps-f81808d1-86d7-4d2a-841a-bb7f177c2503 to disappear
Feb 18 17:30:41.079: INFO: Pod pod-projected-configmaps-f81808d1-86d7-4d2a-841a-bb7f177c2503 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:30:41.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7972" for this suite.

• [SLOW TEST:8.364 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":230,"skipped":3866,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:30:41.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0218 17:30:53.780011       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 17:30:53.780: INFO: For apiserver_request_total:
For the remaining metrics (apiserver_request_latency_seconds through evicted_pods_total): same empty list as at 17:29:09 above.

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:30:53.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3128" for this suite.

• [SLOW TEST:16.044 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":280,"completed":231,"skipped":3868,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:30:57.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:31:20.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3162" for this suite.

• [SLOW TEST:23.198 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":280,"completed":232,"skipped":3884,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
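For completeness, the terminated-state assertion above can be reproduced by hand against any cluster. A minimal sketch, assuming a throwaway pod named bin-false (the pod name and image are illustrative, not taken from this run):

  # Create a pod whose container exits immediately with a failure (illustrative names).
  kubectl run bin-false --image=busybox --restart=Never -- /bin/false
  # Once the container has exited, read the terminated state; Reason is typically "Error".
  kubectl get pod bin-false -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'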
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:31:20.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Feb 18 17:31:20.473: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 17:31:23.465: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:31:33.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9375" for this suite.

• [SLOW TEST:13.634 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":280,"completed":233,"skipped":3894,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
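The assertion in the test above is that the schemas of served CRDs appear in the aggregated OpenAPI document. A sketch of the same check done out of band (the CRD kind Foo under group example.com is a placeholder, not from this run; OpenAPI v2 definition names use the reversed group domain):

  # Dump the aggregated OpenAPI v2 document and look for the CRD's definition key.
  kubectl get --raw /openapi/v2 | grep -o '"com.example.v1.Foo"' | head -n1
  # kubectl explain also reads the published schema for a served CRD kind.
  kubectl explain foo.spec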
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:31:33.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-98e886e1-eea5-4141-9bca-83f31616876b
STEP: Creating a pod to test consume secrets
Feb 18 17:31:34.083: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-be4ea98f-dba9-49d6-b771-46de99be9d7d" in namespace "projected-9572" to be "success or failure"
Feb 18 17:31:34.103: INFO: Pod "pod-projected-secrets-be4ea98f-dba9-49d6-b771-46de99be9d7d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.260682ms
Feb 18 17:31:36.111: INFO: Pod "pod-projected-secrets-be4ea98f-dba9-49d6-b771-46de99be9d7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028434378s
Feb 18 17:31:38.119: INFO: Pod "pod-projected-secrets-be4ea98f-dba9-49d6-b771-46de99be9d7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036002678s
Feb 18 17:31:40.130: INFO: Pod "pod-projected-secrets-be4ea98f-dba9-49d6-b771-46de99be9d7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047387263s
Feb 18 17:31:42.143: INFO: Pod "pod-projected-secrets-be4ea98f-dba9-49d6-b771-46de99be9d7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059502361s
STEP: Saw pod success
Feb 18 17:31:42.143: INFO: Pod "pod-projected-secrets-be4ea98f-dba9-49d6-b771-46de99be9d7d" satisfied condition "success or failure"
Feb 18 17:31:42.148: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-be4ea98f-dba9-49d6-b771-46de99be9d7d container projected-secret-volume-test: 
STEP: delete the pod
Feb 18 17:31:42.208: INFO: Waiting for pod pod-projected-secrets-be4ea98f-dba9-49d6-b771-46de99be9d7d to disappear
Feb 18 17:31:42.212: INFO: Pod pod-projected-secrets-be4ea98f-dba9-49d6-b771-46de99be9d7d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:31:42.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9572" for this suite.

• [SLOW TEST:8.239 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":234,"skipped":3903,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
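What the pod under test consumes is a projected secret volume with an item mapping ("mappings") and an explicit per-item mode ("Item Mode set"). A minimal manifest sketching the same shape; all names, the key-to-path mapping, and the 0400 mode are illustrative, not the exact fixture from this run:

  # Create a secret and a pod that mounts it through a projected volume,
  # remapping key "username" to path "my-group/my-username" with mode 0400.
  kubectl create secret generic demo-secret --from-literal=username=alice
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: reader
      image: busybox
      command: ["sh", "-c", "ls -l /etc/projected/my-group && cat /etc/projected/my-group/my-username"]
      volumeMounts:
      - name: projected-secret
        mountPath: /etc/projected
    volumes:
    - name: projected-secret
      projected:
        sources:
        - secret:
            name: demo-secret
            items:
            - key: username
              path: my-group/my-username
              mode: 0400
  EOF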
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:31:42.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9383.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9383.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 17:31:54.395: INFO: DNS probes using dns-9383/dns-test-783c4d82-d980-4b96-b9ba-c6ffd494a63c succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:31:54.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9383" for this suite.

• [SLOW TEST:12.317 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":280,"completed":235,"skipped":3920,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
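The wheezy/jessie probe loops above reduce to one fact: cluster DNS must answer for kubernetes.default.svc.cluster.local over both UDP (+notcp) and TCP (+tcp). A one-shot version of the same probe from a temporary pod; the dnsutils image is illustrative, any image shipping dig works:

  # Resolve the API service name via UDP and then TCP, as the probe loop does.
  kubectl run dns-check --rm -it --restart=Never --image=tutum/dnsutils -- \
    sh -c 'dig +notcp +search kubernetes.default.svc.cluster.local A && dig +tcp +search kubernetes.default.svc.cluster.local A'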
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:31:54.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-3913
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 18 17:31:54.672: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 18 17:31:54.806: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 18 17:31:57.011: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 18 17:31:58.812: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 18 17:32:01.581: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 18 17:32:02.910: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 18 17:32:04.894: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 18 17:32:06.812: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 18 17:32:08.817: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 18 17:32:10.814: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 18 17:32:12.813: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 18 17:32:14.815: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 18 17:32:16.815: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 18 17:32:16.825: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 18 17:32:18.838: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 18 17:32:20.832: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 18 17:32:22.832: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 18 17:32:33.839: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3913 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 17:32:33.839: INFO: >>> kubeConfig: /root/.kube/config
I0218 17:32:33.925717       9 log.go:172] (0xc004292370) (0xc002a6bc20) Create stream
I0218 17:32:33.926246       9 log.go:172] (0xc004292370) (0xc002a6bc20) Stream added, broadcasting: 1
I0218 17:32:33.930883       9 log.go:172] (0xc004292370) Reply frame received for 1
I0218 17:32:33.930926       9 log.go:172] (0xc004292370) (0xc001f7a320) Create stream
I0218 17:32:33.930939       9 log.go:172] (0xc004292370) (0xc001f7a320) Stream added, broadcasting: 3
I0218 17:32:33.932339       9 log.go:172] (0xc004292370) Reply frame received for 3
I0218 17:32:33.932360       9 log.go:172] (0xc004292370) (0xc002a6bcc0) Create stream
I0218 17:32:33.932372       9 log.go:172] (0xc004292370) (0xc002a6bcc0) Stream added, broadcasting: 5
I0218 17:32:33.937620       9 log.go:172] (0xc004292370) Reply frame received for 5
I0218 17:32:34.097892       9 log.go:172] (0xc004292370) Data frame received for 3
I0218 17:32:34.098059       9 log.go:172] (0xc001f7a320) (3) Data frame handling
I0218 17:32:34.098089       9 log.go:172] (0xc001f7a320) (3) Data frame sent
I0218 17:32:34.209098       9 log.go:172] (0xc004292370) Data frame received for 1
I0218 17:32:34.209335       9 log.go:172] (0xc002a6bc20) (1) Data frame handling
I0218 17:32:34.209365       9 log.go:172] (0xc002a6bc20) (1) Data frame sent
I0218 17:32:34.209894       9 log.go:172] (0xc004292370) (0xc002a6bc20) Stream removed, broadcasting: 1
I0218 17:32:34.210100       9 log.go:172] (0xc004292370) (0xc001f7a320) Stream removed, broadcasting: 3
I0218 17:32:34.210184       9 log.go:172] (0xc004292370) (0xc002a6bcc0) Stream removed, broadcasting: 5
I0218 17:32:34.210206       9 log.go:172] (0xc004292370) Go away received
I0218 17:32:34.210250       9 log.go:172] (0xc004292370) (0xc002a6bc20) Stream removed, broadcasting: 1
I0218 17:32:34.210279       9 log.go:172] (0xc004292370) (0xc001f7a320) Stream removed, broadcasting: 3
I0218 17:32:34.210309       9 log.go:172] (0xc004292370) (0xc002a6bcc0) Stream removed, broadcasting: 5
Feb 18 17:32:34.210: INFO: Found all expected endpoints: [netserver-0]
Feb 18 17:32:34.214: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3913 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 17:32:34.214: INFO: >>> kubeConfig: /root/.kube/config
I0218 17:32:34.252094       9 log.go:172] (0xc003a708f0) (0xc002958fa0) Create stream
I0218 17:32:34.252283       9 log.go:172] (0xc003a708f0) (0xc002958fa0) Stream added, broadcasting: 1
I0218 17:32:34.258024       9 log.go:172] (0xc003a708f0) Reply frame received for 1
I0218 17:32:34.258069       9 log.go:172] (0xc003a708f0) (0xc002a6bea0) Create stream
I0218 17:32:34.258080       9 log.go:172] (0xc003a708f0) (0xc002a6bea0) Stream added, broadcasting: 3
I0218 17:32:34.259497       9 log.go:172] (0xc003a708f0) Reply frame received for 3
I0218 17:32:34.259513       9 log.go:172] (0xc003a708f0) (0xc001f7a500) Create stream
I0218 17:32:34.259520       9 log.go:172] (0xc003a708f0) (0xc001f7a500) Stream added, broadcasting: 5
I0218 17:32:34.260548       9 log.go:172] (0xc003a708f0) Reply frame received for 5
I0218 17:32:34.349013       9 log.go:172] (0xc003a708f0) Data frame received for 3
I0218 17:32:34.349267       9 log.go:172] (0xc002a6bea0) (3) Data frame handling
I0218 17:32:34.349288       9 log.go:172] (0xc002a6bea0) (3) Data frame sent
I0218 17:32:34.447949       9 log.go:172] (0xc003a708f0) (0xc002a6bea0) Stream removed, broadcasting: 3
I0218 17:32:34.448218       9 log.go:172] (0xc003a708f0) Data frame received for 1
I0218 17:32:34.448408       9 log.go:172] (0xc003a708f0) (0xc001f7a500) Stream removed, broadcasting: 5
I0218 17:32:34.448454       9 log.go:172] (0xc002958fa0) (1) Data frame handling
I0218 17:32:34.448483       9 log.go:172] (0xc002958fa0) (1) Data frame sent
I0218 17:32:34.448595       9 log.go:172] (0xc003a708f0) (0xc002958fa0) Stream removed, broadcasting: 1
I0218 17:32:34.448624       9 log.go:172] (0xc003a708f0) Go away received
I0218 17:32:34.448835       9 log.go:172] (0xc003a708f0) (0xc002958fa0) Stream removed, broadcasting: 1
I0218 17:32:34.448880       9 log.go:172] (0xc003a708f0) (0xc002a6bea0) Stream removed, broadcasting: 3
I0218 17:32:34.448942       9 log.go:172] (0xc003a708f0) (0xc001f7a500) Stream removed, broadcasting: 5
Feb 18 17:32:34.449: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:32:34.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3913" for this suite.

• [SLOW TEST:39.931 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":236,"skipped":3923,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
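The ExecWithOptions entries above are the framework curling each netserver pod's /hostName endpoint from the host-network test pod. The same check done by hand, reusing this run's namespace and pod names; POD_IP is a placeholder for the endpoint being probed (10.44.0.1 and 10.32.0.4 in this run):

  POD_IP=10.44.0.1
  kubectl exec -n pod-network-test-3913 host-test-container-pod -c agnhost -- \
    sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://${POD_IP}:8080/hostName"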
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:32:34.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 18 17:32:34.618: INFO: Waiting up to 5m0s for pod "pod-59d7920e-d48d-47dc-af67-797d32ba6082" in namespace "emptydir-1713" to be "success or failure"
Feb 18 17:32:34.623: INFO: Pod "pod-59d7920e-d48d-47dc-af67-797d32ba6082": Phase="Pending", Reason="", readiness=false. Elapsed: 5.341187ms
Feb 18 17:32:36.630: INFO: Pod "pod-59d7920e-d48d-47dc-af67-797d32ba6082": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011866011s
Feb 18 17:32:38.637: INFO: Pod "pod-59d7920e-d48d-47dc-af67-797d32ba6082": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018647402s
Feb 18 17:32:42.325: INFO: Pod "pod-59d7920e-d48d-47dc-af67-797d32ba6082": Phase="Pending", Reason="", readiness=false. Elapsed: 7.707493539s
Feb 18 17:32:44.450: INFO: Pod "pod-59d7920e-d48d-47dc-af67-797d32ba6082": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.831751109s
STEP: Saw pod success
Feb 18 17:32:44.450: INFO: Pod "pod-59d7920e-d48d-47dc-af67-797d32ba6082" satisfied condition "success or failure"
Feb 18 17:32:44.456: INFO: Trying to get logs from node jerma-node pod pod-59d7920e-d48d-47dc-af67-797d32ba6082 container test-container: 
STEP: delete the pod
Feb 18 17:32:45.275: INFO: Waiting for pod pod-59d7920e-d48d-47dc-af67-797d32ba6082 to disappear
Feb 18 17:32:45.281: INFO: Pod pod-59d7920e-d48d-47dc-af67-797d32ba6082 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:32:45.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1713" for this suite.

• [SLOW TEST:10.824 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":237,"skipped":3980,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
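Behind the (non-root,0644,default) label is a pod that runs as a non-root UID, writes a 0644 file into an emptyDir on the node's default medium, and verifies the resulting mode. A minimal sketch of that shape (names and the UID are illustrative; emptyDir directories are world-writable by default, so the non-root write succeeds):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000        # the "non-root" part of the case
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo data > /ed/file && chmod 0644 /ed/file && ls -l /ed/file"]
      volumeMounts:
      - name: ed
        mountPath: /ed
    volumes:
    - name: ed
      emptyDir: {}           # default medium (node disk)
  EOF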
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:32:45.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6456.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6456.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6456.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6456.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6456.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6456.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 17:32:57.862: INFO: DNS probes using dns-6456/dns-test-49376a36-9b52-4571-b908-3cf23a1bdcb0 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:32:57.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6456" for this suite.

• [SLOW TEST:12.765 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":280,"completed":238,"skipped":4006,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
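This case exercises pod hostname records behind a headless service: a pod with hostname and subdomain set, where the subdomain matches a headless service's name, gets an A record at <hostname>.<service>.<namespace>.svc.cluster.local, which is what the getent probes above query. A sketch of that shape, assuming the default namespace (all names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: headless-demo
  spec:
    clusterIP: None          # headless: DNS returns pod A records directly
    selector:
      app: dns-demo
    ports:
    - port: 80
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: querier
    labels:
      app: dns-demo
  spec:
    hostname: dns-querier
    subdomain: headless-demo # must match the service name for the pod A record
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
  EOF
  # From inside the pod, the record should resolve:
  kubectl exec querier -- nslookup dns-querier.headless-demo.default.svc.cluster.local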
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:32:58.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 18 17:32:58.197: INFO: Waiting up to 5m0s for pod "pod-4e36f661-3ffa-4820-bb55-6ca892a5ce71" in namespace "emptydir-945" to be "success or failure"
Feb 18 17:32:58.204: INFO: Pod "pod-4e36f661-3ffa-4820-bb55-6ca892a5ce71": Phase="Pending", Reason="", readiness=false. Elapsed: 6.90273ms
Feb 18 17:33:00.280: INFO: Pod "pod-4e36f661-3ffa-4820-bb55-6ca892a5ce71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082249739s
Feb 18 17:33:02.288: INFO: Pod "pod-4e36f661-3ffa-4820-bb55-6ca892a5ce71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091143608s
Feb 18 17:33:04.294: INFO: Pod "pod-4e36f661-3ffa-4820-bb55-6ca892a5ce71": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096949859s
Feb 18 17:33:06.302: INFO: Pod "pod-4e36f661-3ffa-4820-bb55-6ca892a5ce71": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104890027s
Feb 18 17:33:08.357: INFO: Pod "pod-4e36f661-3ffa-4820-bb55-6ca892a5ce71": Phase="Pending", Reason="", readiness=false. Elapsed: 10.159288861s
Feb 18 17:33:10.372: INFO: Pod "pod-4e36f661-3ffa-4820-bb55-6ca892a5ce71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.17510685s
STEP: Saw pod success
Feb 18 17:33:10.373: INFO: Pod "pod-4e36f661-3ffa-4820-bb55-6ca892a5ce71" satisfied condition "success or failure"
Feb 18 17:33:10.376: INFO: Trying to get logs from node jerma-node pod pod-4e36f661-3ffa-4820-bb55-6ca892a5ce71 container test-container: 
STEP: delete the pod
Feb 18 17:33:10.488: INFO: Waiting for pod pod-4e36f661-3ffa-4820-bb55-6ca892a5ce71 to disappear
Feb 18 17:33:10.503: INFO: Pod pod-4e36f661-3ffa-4820-bb55-6ca892a5ce71 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:33:10.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-945" for this suite.

• [SLOW TEST:12.451 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":239,"skipped":4021,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
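The tmpfs variant differs from the default-medium case only in the volume's medium. A sketch (names illustrative):

  # Same idea as the default-medium case, but the emptyDir is RAM-backed tmpfs.
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "mount | grep /ed && touch /ed/f && chmod 0777 /ed/f && ls -l /ed/f"]
      volumeMounts:
      - name: ed
        mountPath: /ed
    volumes:
    - name: ed
      emptyDir:
        medium: Memory       # tmpfs
  EOF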
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:33:10.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 18 17:33:10.679: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.819914ms)
Feb 18 17:33:10.683: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.782782ms)
Feb 18 17:33:10.686: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.20436ms)
Feb 18 17:33:10.690: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.121737ms)
Feb 18 17:33:10.694: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.647329ms)
Feb 18 17:33:10.706: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.619397ms)
Feb 18 17:33:10.713: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.917906ms)
Feb 18 17:33:10.733: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 19.478419ms)
Feb 18 17:33:10.737: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.758588ms)
Feb 18 17:33:10.740: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.404473ms)
Feb 18 17:33:10.744: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.763367ms)
Feb 18 17:33:10.747: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.566411ms)
Feb 18 17:33:10.752: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.242556ms)
Feb 18 17:33:10.755: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.107771ms)
Feb 18 17:33:10.758: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.040292ms)
Feb 18 17:33:10.762: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.757079ms)
Feb 18 17:33:10.765: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.082306ms)
Feb 18 17:33:10.773: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.217136ms)
Feb 18 17:33:10.777: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.721497ms)
Feb 18 17:33:10.780: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.226129ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:33:10.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9338" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":280,"completed":240,"skipped":4028,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
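The twenty numbered requests above hit the node proxy subresource with an explicit kubelet port (10250). The same listing can be fetched directly through the apiserver, reusing this run's node name:

  # Fetch the kubelet's /logs/ listing through the apiserver's node proxy subresource.
  kubectl get --raw "/api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/"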
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:33:10.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 18 17:33:10.953: INFO: Create a RollingUpdate DaemonSet
Feb 18 17:33:10.958: INFO: Check that daemon pods launch on every node of the cluster
Feb 18 17:33:11.042: INFO: Number of nodes with available pods: 0
Feb 18 17:33:11.043: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:33:12.064: INFO: Number of nodes with available pods: 0
Feb 18 17:33:12.064: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:33:13.429: INFO: Number of nodes with available pods: 0
Feb 18 17:33:13.429: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:33:14.064: INFO: Number of nodes with available pods: 0
Feb 18 17:33:14.064: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:33:15.060: INFO: Number of nodes with available pods: 0
Feb 18 17:33:15.060: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:33:16.155: INFO: Number of nodes with available pods: 0
Feb 18 17:33:16.155: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:33:18.464: INFO: Number of nodes with available pods: 0
Feb 18 17:33:18.464: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:33:19.686: INFO: Number of nodes with available pods: 0
Feb 18 17:33:19.687: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:33:20.079: INFO: Number of nodes with available pods: 1
Feb 18 17:33:20.080: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 18 17:33:21.054: INFO: Number of nodes with available pods: 2
Feb 18 17:33:21.054: INFO: Number of running nodes: 2, number of available pods: 2
Feb 18 17:33:21.054: INFO: Update the DaemonSet to trigger a rollout
Feb 18 17:33:21.068: INFO: Updating DaemonSet daemon-set
Feb 18 17:33:28.149: INFO: Roll back the DaemonSet before rollout is complete
Feb 18 17:33:28.156: INFO: Updating DaemonSet daemon-set
Feb 18 17:33:28.156: INFO: Make sure DaemonSet rollback is complete
Feb 18 17:33:28.564: INFO: Wrong image for pod: daemon-set-pggbl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 18 17:33:28.564: INFO: Pod daemon-set-pggbl is not available
Feb 18 17:33:29.620: INFO: Wrong image for pod: daemon-set-pggbl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 18 17:33:29.621: INFO: Pod daemon-set-pggbl is not available
Feb 18 17:33:30.585: INFO: Wrong image for pod: daemon-set-pggbl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 18 17:33:30.585: INFO: Pod daemon-set-pggbl is not available
Feb 18 17:33:31.584: INFO: Wrong image for pod: daemon-set-pggbl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 18 17:33:31.584: INFO: Pod daemon-set-pggbl is not available
Feb 18 17:33:32.639: INFO: Pod daemon-set-m5xzk is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-650, will wait for the garbage collector to delete the pods
Feb 18 17:33:32.770: INFO: Deleting DaemonSet.extensions daemon-set took: 11.411558ms
Feb 18 17:33:33.571: INFO: Terminating DaemonSet.extensions daemon-set pods took: 801.156404ms
Feb 18 17:33:40.579: INFO: Number of nodes with available pods: 0
Feb 18 17:33:40.579: INFO: Number of running nodes: 0, number of available pods: 0
Feb 18 17:33:40.583: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-650/daemonsets","resourceVersion":"9226504"},"items":null}

Feb 18 17:33:40.587: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-650/pods","resourceVersion":"9226504"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:33:40.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-650" for this suite.

• [SLOW TEST:29.863 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":280,"completed":241,"skipped":4034,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
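The sequence above is: update the DaemonSet to a non-existent image mid-rollout, roll back before the rollout completes, and confirm that already-healthy pods were not restarted. The equivalent imperative steps, reusing this run's namespace, DaemonSet name, and bad image; the container name "app" is a placeholder:

  # Trigger a rollout with a bad image, then undo it before it finishes.
  kubectl -n daemonsets-650 set image daemonset/daemon-set app=foo:non-existent
  kubectl -n daemonsets-650 rollout undo daemonset/daemon-set
  kubectl -n daemonsets-650 rollout status daemonset/daemon-set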
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:33:40.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 18 17:33:40.746: INFO: Creating ReplicaSet my-hostname-basic-69e5587b-0f16-41d7-94c9-d531c998986b
Feb 18 17:33:40.805: INFO: Pod name my-hostname-basic-69e5587b-0f16-41d7-94c9-d531c998986b: Found 0 pods out of 1
Feb 18 17:33:45.823: INFO: Pod name my-hostname-basic-69e5587b-0f16-41d7-94c9-d531c998986b: Found 1 pods out of 1
Feb 18 17:33:45.823: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-69e5587b-0f16-41d7-94c9-d531c998986b" is running
Feb 18 17:33:47.839: INFO: Pod "my-hostname-basic-69e5587b-0f16-41d7-94c9-d531c998986b-bhh9p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 17:33:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 17:33:40 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-69e5587b-0f16-41d7-94c9-d531c998986b]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 17:33:40 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-69e5587b-0f16-41d7-94c9-d531c998986b]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 17:33:40 +0000 UTC Reason: Message:}])
Feb 18 17:33:47.839: INFO: Trying to dial the pod
Feb 18 17:33:52.880: INFO: Controller my-hostname-basic-69e5587b-0f16-41d7-94c9-d531c998986b: Got expected result from replica 1 [my-hostname-basic-69e5587b-0f16-41d7-94c9-d531c998986b-bhh9p]: "my-hostname-basic-69e5587b-0f16-41d7-94c9-d531c998986b-bhh9p", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:33:52.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7491" for this suite.

• [SLOW TEST:12.241 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":242,"skipped":4044,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
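The ReplicaSet under test serves its own pod name over HTTP, which is what the "Trying to dial the pod" step verifies by expecting the pod name back from the replica. A minimal ReplicaSet of the same shape; the image and its serve-hostname argument are illustrative, any image that answers HTTP with its hostname works:

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: hostname-demo
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: hostname-demo
    template:
      metadata:
        labels:
          app: hostname-demo
      spec:
        containers:
        - name: serve-hostname
          image: k8s.gcr.io/e2e-test-images/agnhost:2.8   # illustrative choice
          args: ["serve-hostname"]
  EOF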
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:33:52.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1598
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 18 17:33:53.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-885'
Feb 18 17:33:53.274: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 18 17:33:53.274: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1604
Feb 18 17:33:55.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-885'
Feb 18 17:33:55.497: INFO: stderr: ""
Feb 18 17:33:55.497: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:33:55.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-885" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":280,"completed":243,"skipped":4049,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
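The stderr captured above records the deprecation that later removed generator-based kubectl run: workload creation moved to kubectl create, and kubectl run was reduced to creating bare pods. The modern equivalents of the command this test issued:

  # Deployment (replaces: kubectl run --generator=deployment/apps.v1 NAME --image=IMG)
  kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine
  # Bare pod (what kubectl run creates in current releases)
  kubectl run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --restart=Never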
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:33:55.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 18 17:33:55.658: INFO: Number of nodes with available pods: 0
Feb 18 17:33:55.658: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:33:56.667: INFO: Number of nodes with available pods: 0
Feb 18 17:33:56.668: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:33:57.673: INFO: Number of nodes with available pods: 0
Feb 18 17:33:57.673: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:33:58.843: INFO: Number of nodes with available pods: 0
Feb 18 17:33:58.843: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:33:59.931: INFO: Number of nodes with available pods: 0
Feb 18 17:33:59.931: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:34:00.678: INFO: Number of nodes with available pods: 0
Feb 18 17:34:00.678: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:34:02.896: INFO: Number of nodes with available pods: 0
Feb 18 17:34:02.896: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:34:03.672: INFO: Number of nodes with available pods: 0
Feb 18 17:34:03.672: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:34:04.839: INFO: Number of nodes with available pods: 0
Feb 18 17:34:04.840: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:34:05.672: INFO: Number of nodes with available pods: 0
Feb 18 17:34:05.672: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:34:06.669: INFO: Number of nodes with available pods: 1
Feb 18 17:34:06.669: INFO: Node jerma-node is running more than one daemon pod
Feb 18 17:34:07.680: INFO: Number of nodes with available pods: 2
Feb 18 17:34:07.681: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 18 17:34:07.714: INFO: Number of nodes with available pods: 1
Feb 18 17:34:07.714: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 18 17:34:08.745: INFO: Number of nodes with available pods: 1
Feb 18 17:34:08.746: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 18 17:34:10.008: INFO: Number of nodes with available pods: 1
Feb 18 17:34:10.008: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 18 17:34:10.728: INFO: Number of nodes with available pods: 1
Feb 18 17:34:10.728: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 18 17:34:11.728: INFO: Number of nodes with available pods: 1
Feb 18 17:34:11.728: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 18 17:34:12.726: INFO: Number of nodes with available pods: 1
Feb 18 17:34:12.726: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 18 17:34:13.799: INFO: Number of nodes with available pods: 1
Feb 18 17:34:13.799: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 18 17:34:14.865: INFO: Number of nodes with available pods: 1
Feb 18 17:34:14.866: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 18 17:34:15.742: INFO: Number of nodes with available pods: 1
Feb 18 17:34:15.742: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 18 17:34:16.733: INFO: Number of nodes with available pods: 1
Feb 18 17:34:16.733: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 18 17:34:21.469: INFO: Number of nodes with available pods: 1
Feb 18 17:34:21.469: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 18 17:34:21.729: INFO: Number of nodes with available pods: 1
Feb 18 17:34:21.730: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 18 17:34:22.728: INFO: Number of nodes with available pods: 1
Feb 18 17:34:22.728: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 18 17:34:23.727: INFO: Number of nodes with available pods: 2
Feb 18 17:34:23.727: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7786, will wait for the garbage collector to delete the pods
Feb 18 17:34:23.795: INFO: Deleting DaemonSet.extensions daemon-set took: 10.588461ms
Feb 18 17:34:24.096: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.99952ms
Feb 18 17:34:43.202: INFO: Number of nodes with available pods: 0
Feb 18 17:34:43.202: INFO: Number of running nodes: 0, number of available pods: 0
Feb 18 17:34:43.205: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7786/daemonsets","resourceVersion":"9226777"},"items":null}

Feb 18 17:34:43.209: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7786/pods","resourceVersion":"9226777"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:34:43.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7786" for this suite.

• [SLOW TEST:47.755 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":280,"completed":244,"skipped":4062,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
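The polling loop above ("Number of nodes with available pods") corresponds to comparing the DaemonSet's desired and available counts in its status. The same readiness check read directly from the object, reusing this run's namespace and DaemonSet name:

  # Compare desired vs. available daemon pods; the test loops until these match.
  kubectl -n daemonsets-7786 get daemonset daemon-set \
    -o jsonpath='{.status.desiredNumberScheduled} {.status.numberAvailable}{"\n"}'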
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:34:43.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service nodeport-test with type=NodePort in namespace services-4495
STEP: creating replication controller nodeport-test in namespace services-4495
I0218 17:34:43.410947       9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-4495, replica count: 2
I0218 17:34:46.461887       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 17:34:49.462735       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 17:34:52.463590       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 17:34:55.464558       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 18 17:34:55.464: INFO: Creating new exec pod
Feb 18 17:35:04.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4495 execpod8dqmw -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Feb 18 17:35:05.129: INFO: stderr: "I0218 17:35:04.723377    4285 log.go:172] (0xc0008ee160) (0xc000417cc0) Create stream\nI0218 17:35:04.724276    4285 log.go:172] (0xc0008ee160) (0xc000417cc0) Stream added, broadcasting: 1\nI0218 17:35:04.760944    4285 log.go:172] (0xc0008ee160) Reply frame received for 1\nI0218 17:35:04.761150    4285 log.go:172] (0xc0008ee160) (0xc0009cc000) Create stream\nI0218 17:35:04.761173    4285 log.go:172] (0xc0008ee160) (0xc0009cc000) Stream added, broadcasting: 3\nI0218 17:35:04.772240    4285 log.go:172] (0xc0008ee160) Reply frame received for 3\nI0218 17:35:04.772307    4285 log.go:172] (0xc0008ee160) (0xc0009cc0a0) Create stream\nI0218 17:35:04.772316    4285 log.go:172] (0xc0008ee160) (0xc0009cc0a0) Stream added, broadcasting: 5\nI0218 17:35:04.773951    4285 log.go:172] (0xc0008ee160) Reply frame received for 5\nI0218 17:35:04.968637    4285 log.go:172] (0xc0008ee160) Data frame received for 5\nI0218 17:35:04.969298    4285 log.go:172] (0xc0009cc0a0) (5) Data frame handling\nI0218 17:35:04.969369    4285 log.go:172] (0xc0009cc0a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0218 17:35:04.980221    4285 log.go:172] (0xc0008ee160) Data frame received for 5\nI0218 17:35:04.980261    4285 log.go:172] (0xc0009cc0a0) (5) Data frame handling\nI0218 17:35:04.980278    4285 log.go:172] (0xc0009cc0a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0218 17:35:05.119924    4285 log.go:172] (0xc0008ee160) (0xc0009cc000) Stream removed, broadcasting: 3\nI0218 17:35:05.120012    4285 log.go:172] (0xc0008ee160) Data frame received for 1\nI0218 17:35:05.120032    4285 log.go:172] (0xc000417cc0) (1) Data frame handling\nI0218 17:35:05.120043    4285 log.go:172] (0xc000417cc0) (1) Data frame sent\nI0218 17:35:05.120054    4285 log.go:172] (0xc0008ee160) (0xc000417cc0) Stream removed, broadcasting: 1\nI0218 17:35:05.120142    4285 log.go:172] (0xc0008ee160) (0xc0009cc0a0) Stream removed, broadcasting: 5\nI0218 17:35:05.120161    4285 log.go:172] (0xc0008ee160) Go away received\nI0218 17:35:05.120602    4285 log.go:172] (0xc0008ee160) (0xc000417cc0) Stream removed, broadcasting: 1\nI0218 17:35:05.120611    4285 log.go:172] (0xc0008ee160) (0xc0009cc000) Stream removed, broadcasting: 3\nI0218 17:35:05.120615    4285 log.go:172] (0xc0008ee160) (0xc0009cc0a0) Stream removed, broadcasting: 5\n"
Feb 18 17:35:05.130: INFO: stdout: ""
Feb 18 17:35:05.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4495 execpod8dqmw -- /bin/sh -x -c nc -zv -t -w 2 10.96.61.98 80'
Feb 18 17:35:05.487: INFO: stderr: "I0218 17:35:05.320864    4301 log.go:172] (0xc000b36210) (0xc000c2a1e0) Create stream\nI0218 17:35:05.321037    4301 log.go:172] (0xc000b36210) (0xc000c2a1e0) Stream added, broadcasting: 1\nI0218 17:35:05.324242    4301 log.go:172] (0xc000b36210) Reply frame received for 1\nI0218 17:35:05.324400    4301 log.go:172] (0xc000b36210) (0xc000af41e0) Create stream\nI0218 17:35:05.324425    4301 log.go:172] (0xc000b36210) (0xc000af41e0) Stream added, broadcasting: 3\nI0218 17:35:05.326719    4301 log.go:172] (0xc000b36210) Reply frame received for 3\nI0218 17:35:05.326749    4301 log.go:172] (0xc000b36210) (0xc000c2a280) Create stream\nI0218 17:35:05.326763    4301 log.go:172] (0xc000b36210) (0xc000c2a280) Stream added, broadcasting: 5\nI0218 17:35:05.328113    4301 log.go:172] (0xc000b36210) Reply frame received for 5\nI0218 17:35:05.387576    4301 log.go:172] (0xc000b36210) Data frame received for 5\nI0218 17:35:05.387622    4301 log.go:172] (0xc000c2a280) (5) Data frame handling\nI0218 17:35:05.387648    4301 log.go:172] (0xc000c2a280) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.61.98 80\nConnection to 10.96.61.98 80 port [tcp/http] succeeded!\nI0218 17:35:05.473950    4301 log.go:172] (0xc000b36210) Data frame received for 1\nI0218 17:35:05.474013    4301 log.go:172] (0xc000b36210) (0xc000af41e0) Stream removed, broadcasting: 3\nI0218 17:35:05.474104    4301 log.go:172] (0xc000c2a1e0) (1) Data frame handling\nI0218 17:35:05.474156    4301 log.go:172] (0xc000c2a1e0) (1) Data frame sent\nI0218 17:35:05.474292    4301 log.go:172] (0xc000b36210) (0xc000c2a1e0) Stream removed, broadcasting: 1\nI0218 17:35:05.476401    4301 log.go:172] (0xc000b36210) (0xc000c2a280) Stream removed, broadcasting: 5\nI0218 17:35:05.476488    4301 log.go:172] (0xc000b36210) Go away received\nI0218 17:35:05.476627    4301 log.go:172] (0xc000b36210) (0xc000c2a1e0) Stream removed, broadcasting: 1\nI0218 17:35:05.476663    4301 log.go:172] (0xc000b36210) (0xc000af41e0) Stream removed, broadcasting: 3\nI0218 17:35:05.476684    4301 log.go:172] (0xc000b36210) (0xc000c2a280) Stream removed, broadcasting: 5\n"
Feb 18 17:35:05.488: INFO: stdout: ""
Feb 18 17:35:05.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4495 execpod8dqmw -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32288'
Feb 18 17:35:05.765: INFO: stderr: "I0218 17:35:05.610657    4321 log.go:172] (0xc0008929a0) (0xc000b1c000) Create stream\nI0218 17:35:05.610848    4321 log.go:172] (0xc0008929a0) (0xc000b1c000) Stream added, broadcasting: 1\nI0218 17:35:05.614269    4321 log.go:172] (0xc0008929a0) Reply frame received for 1\nI0218 17:35:05.614300    4321 log.go:172] (0xc0008929a0) (0xc0005d3b80) Create stream\nI0218 17:35:05.614311    4321 log.go:172] (0xc0008929a0) (0xc0005d3b80) Stream added, broadcasting: 3\nI0218 17:35:05.615433    4321 log.go:172] (0xc0008929a0) Reply frame received for 3\nI0218 17:35:05.615454    4321 log.go:172] (0xc0008929a0) (0xc000706000) Create stream\nI0218 17:35:05.615461    4321 log.go:172] (0xc0008929a0) (0xc000706000) Stream added, broadcasting: 5\nI0218 17:35:05.616335    4321 log.go:172] (0xc0008929a0) Reply frame received for 5\nI0218 17:35:05.693072    4321 log.go:172] (0xc0008929a0) Data frame received for 5\nI0218 17:35:05.693362    4321 log.go:172] (0xc000706000) (5) Data frame handling\nI0218 17:35:05.693383    4321 log.go:172] (0xc000706000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32288\nI0218 17:35:05.694046    4321 log.go:172] (0xc0008929a0) Data frame received for 5\nI0218 17:35:05.694101    4321 log.go:172] (0xc000706000) (5) Data frame handling\nI0218 17:35:05.694113    4321 log.go:172] (0xc000706000) (5) Data frame sent\nConnection to 10.96.2.250 32288 port [tcp/32288] succeeded!\nI0218 17:35:05.754616    4321 log.go:172] (0xc0008929a0) (0xc0005d3b80) Stream removed, broadcasting: 3\nI0218 17:35:05.754683    4321 log.go:172] (0xc0008929a0) Data frame received for 1\nI0218 17:35:05.754697    4321 log.go:172] (0xc000b1c000) (1) Data frame handling\nI0218 17:35:05.754709    4321 log.go:172] (0xc0008929a0) (0xc000706000) Stream removed, broadcasting: 5\nI0218 17:35:05.754725    4321 log.go:172] (0xc000b1c000) (1) Data frame sent\nI0218 17:35:05.754735    4321 log.go:172] (0xc0008929a0) (0xc000b1c000) Stream removed, broadcasting: 1\nI0218 17:35:05.754749    4321 log.go:172] (0xc0008929a0) Go away received\nI0218 17:35:05.755207    4321 log.go:172] (0xc0008929a0) (0xc000b1c000) Stream removed, broadcasting: 1\nI0218 17:35:05.755222    4321 log.go:172] (0xc0008929a0) (0xc0005d3b80) Stream removed, broadcasting: 3\nI0218 17:35:05.755228    4321 log.go:172] (0xc0008929a0) (0xc000706000) Stream removed, broadcasting: 5\n"
Feb 18 17:35:05.766: INFO: stdout: ""
Feb 18 17:35:05.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4495 execpod8dqmw -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32288'
Feb 18 17:35:06.056: INFO: stderr: "I0218 17:35:05.897053    4342 log.go:172] (0xc000b23290) (0xc000b641e0) Create stream\nI0218 17:35:05.897142    4342 log.go:172] (0xc000b23290) (0xc000b641e0) Stream added, broadcasting: 1\nI0218 17:35:05.900115    4342 log.go:172] (0xc000b23290) Reply frame received for 1\nI0218 17:35:05.900228    4342 log.go:172] (0xc000b23290) (0xc000ae00a0) Create stream\nI0218 17:35:05.900242    4342 log.go:172] (0xc000b23290) (0xc000ae00a0) Stream added, broadcasting: 3\nI0218 17:35:05.902213    4342 log.go:172] (0xc000b23290) Reply frame received for 3\nI0218 17:35:05.902249    4342 log.go:172] (0xc000b23290) (0xc0009e81e0) Create stream\nI0218 17:35:05.902263    4342 log.go:172] (0xc000b23290) (0xc0009e81e0) Stream added, broadcasting: 5\nI0218 17:35:05.903972    4342 log.go:172] (0xc000b23290) Reply frame received for 5\nI0218 17:35:05.970664    4342 log.go:172] (0xc000b23290) Data frame received for 5\nI0218 17:35:05.970711    4342 log.go:172] (0xc0009e81e0) (5) Data frame handling\nI0218 17:35:05.970761    4342 log.go:172] (0xc0009e81e0) (5) Data frame sent\nI0218 17:35:05.970778    4342 log.go:172] (0xc000b23290) Data frame received for 5\nI0218 17:35:05.970790    4342 log.go:172] (0xc0009e81e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.1.234 32288\nI0218 17:35:05.970827    4342 log.go:172] (0xc0009e81e0) (5) Data frame sent\nI0218 17:35:05.976410    4342 log.go:172] (0xc000b23290) Data frame received for 5\nI0218 17:35:05.976436    4342 log.go:172] (0xc0009e81e0) (5) Data frame handling\nI0218 17:35:05.976454    4342 log.go:172] (0xc0009e81e0) (5) Data frame sent\nConnection to 10.96.1.234 32288 port [tcp/32288] succeeded!\nI0218 17:35:06.050951    4342 log.go:172] (0xc000b23290) Data frame received for 1\nI0218 17:35:06.051007    4342 log.go:172] (0xc000b23290) (0xc0009e81e0) Stream removed, broadcasting: 5\nI0218 17:35:06.051040    4342 log.go:172] (0xc000b641e0) (1) Data frame handling\nI0218 17:35:06.051058    4342 log.go:172] (0xc000b641e0) (1) Data frame sent\nI0218 17:35:06.051083    4342 log.go:172] (0xc000b23290) (0xc000ae00a0) Stream removed, broadcasting: 3\nI0218 17:35:06.051114    4342 log.go:172] (0xc000b23290) (0xc000b641e0) Stream removed, broadcasting: 1\nI0218 17:35:06.051132    4342 log.go:172] (0xc000b23290) Go away received\nI0218 17:35:06.051672    4342 log.go:172] (0xc000b23290) (0xc000b641e0) Stream removed, broadcasting: 1\nI0218 17:35:06.051686    4342 log.go:172] (0xc000b23290) (0xc000ae00a0) Stream removed, broadcasting: 3\nI0218 17:35:06.051693    4342 log.go:172] (0xc000b23290) (0xc0009e81e0) Stream removed, broadcasting: 5\n"
Feb 18 17:35:06.056: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:35:06.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4495" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:22.800 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":280,"completed":245,"skipped":4079,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:35:06.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 18 17:35:06.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Feb 18 17:35:06.747: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-18T17:35:06Z generation:1 name:name1 resourceVersion:9226903 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ba469382-3024-4a64-9daf-4b85019425ea] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Feb 18 17:35:16.756: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-18T17:35:16Z generation:1 name:name2 resourceVersion:9226954 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6ff368a0-479a-4853-8f09-c76f2bd2ecd3] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Feb 18 17:35:26.764: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-18T17:35:06Z generation:2 name:name1 resourceVersion:9226974 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ba469382-3024-4a64-9daf-4b85019425ea] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Feb 18 17:35:36.777: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-18T17:35:16Z generation:2 name:name2 resourceVersion:9226998 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6ff368a0-479a-4853-8f09-c76f2bd2ecd3] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Feb 18 17:35:46.796: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-18T17:35:06Z generation:2 name:name1 resourceVersion:9227022 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ba469382-3024-4a64-9daf-4b85019425ea] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Feb 18 17:35:56.811: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-18T17:35:16Z generation:2 name:name2 resourceVersion:9227044 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6ff368a0-479a-4853-8f09-c76f2bd2ecd3] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:36:07.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-1728" for this suite.

• [SLOW TEST:61.283 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":280,"completed":246,"skipped":4096,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:36:07.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-t5t4
STEP: Creating a pod to test atomic-volume-subpath
Feb 18 17:36:07.476: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-t5t4" in namespace "subpath-3914" to be "success or failure"
Feb 18 17:36:07.484: INFO: Pod "pod-subpath-test-configmap-t5t4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.389944ms
Feb 18 17:36:09.492: INFO: Pod "pod-subpath-test-configmap-t5t4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016075519s
Feb 18 17:36:11.509: INFO: Pod "pod-subpath-test-configmap-t5t4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033097805s
Feb 18 17:36:13.552: INFO: Pod "pod-subpath-test-configmap-t5t4": Phase="Running", Reason="", readiness=true. Elapsed: 6.076102298s
Feb 18 17:36:15.560: INFO: Pod "pod-subpath-test-configmap-t5t4": Phase="Running", Reason="", readiness=true. Elapsed: 8.084195184s
Feb 18 17:36:17.581: INFO: Pod "pod-subpath-test-configmap-t5t4": Phase="Running", Reason="", readiness=true. Elapsed: 10.104659047s
Feb 18 17:36:19.587: INFO: Pod "pod-subpath-test-configmap-t5t4": Phase="Running", Reason="", readiness=true. Elapsed: 12.111446683s
Feb 18 17:36:21.604: INFO: Pod "pod-subpath-test-configmap-t5t4": Phase="Running", Reason="", readiness=true. Elapsed: 14.128612151s
Feb 18 17:36:23.615: INFO: Pod "pod-subpath-test-configmap-t5t4": Phase="Running", Reason="", readiness=true. Elapsed: 16.139377791s
Feb 18 17:36:25.627: INFO: Pod "pod-subpath-test-configmap-t5t4": Phase="Running", Reason="", readiness=true. Elapsed: 18.151005034s
Feb 18 17:36:27.635: INFO: Pod "pod-subpath-test-configmap-t5t4": Phase="Running", Reason="", readiness=true. Elapsed: 20.158754931s
Feb 18 17:36:29.641: INFO: Pod "pod-subpath-test-configmap-t5t4": Phase="Running", Reason="", readiness=true. Elapsed: 22.165618611s
Feb 18 17:36:31.652: INFO: Pod "pod-subpath-test-configmap-t5t4": Phase="Running", Reason="", readiness=true. Elapsed: 24.175741496s
Feb 18 17:36:33.665: INFO: Pod "pod-subpath-test-configmap-t5t4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.188908925s
STEP: Saw pod success
Feb 18 17:36:33.665: INFO: Pod "pod-subpath-test-configmap-t5t4" satisfied condition "success or failure"
Feb 18 17:36:33.670: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-t5t4 container test-container-subpath-configmap-t5t4: 
STEP: delete the pod
Feb 18 17:36:33.764: INFO: Waiting for pod pod-subpath-test-configmap-t5t4 to disappear
Feb 18 17:36:33.773: INFO: Pod pod-subpath-test-configmap-t5t4 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-t5t4
Feb 18 17:36:33.774: INFO: Deleting pod "pod-subpath-test-configmap-t5t4" in namespace "subpath-3914"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:36:33.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3914" for this suite.

• [SLOW TEST:26.452 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":280,"completed":247,"skipped":4100,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:36:33.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-24d2fa4f-3228-41a2-ab1c-b1de8aadd82a
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:36:34.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7608" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":280,"completed":248,"skipped":4118,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:36:34.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:36:53.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5744" for this suite.
STEP: Destroying namespace "nsdeletetest-8920" for this suite.
Feb 18 17:36:53.484: INFO: Namespace nsdeletetest-8920 was already deleted
STEP: Destroying namespace "nsdeletetest-4662" for this suite.

• [SLOW TEST:19.313 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":280,"completed":249,"skipped":4119,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:36:53.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 18 17:36:53.560: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb 18 17:37:06.685: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:37:06.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5505" for this suite.

• [SLOW TEST:13.215 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":280,"completed":250,"skipped":4128,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:37:06.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 18 17:37:06.822: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 18 17:37:06.873: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 18 17:37:11.940: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 18 17:37:13.957: INFO: Creating deployment "test-rolling-update-deployment"
Feb 18 17:37:13.964: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 18 17:37:14.052: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 18 17:37:16.060: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 18 17:37:16.062: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717644234, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717644234, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717644234, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717644234, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 17:37:18.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717644234, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717644234, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717644234, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717644234, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 17:37:20.095: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 18 17:37:20.109: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-1711 /apis/apps/v1/namespaces/deployment-1711/deployments/test-rolling-update-deployment 829e21fb-0dd4-47d6-8f96-880ed133d1ea 9227402 1 2020-02-18 17:37:13 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0066418e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-18 17:37:14 +0000 UTC,LastTransitionTime:2020-02-18 17:37:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-02-18 17:37:20 +0000 UTC,LastTransitionTime:2020-02-18 17:37:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 18 17:37:20.112: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-1711 /apis/apps/v1/namespaces/deployment-1711/replicasets/test-rolling-update-deployment-67cf4f6444 ed46ef5c-59e3-4c03-81d1-ff26bb7d06ec 9227392 1 2020-02-18 17:37:13 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 829e21fb-0dd4-47d6-8f96-880ed133d1ea 0xc003da8817 0xc003da8818}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003da8888  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 18 17:37:20.112: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 18 17:37:20.112: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-1711 /apis/apps/v1/namespaces/deployment-1711/replicasets/test-rolling-update-controller 4e84b8bb-eaa9-40f2-affe-b92cc333b1e0 9227401 2 2020-02-18 17:37:06 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 829e21fb-0dd4-47d6-8f96-880ed133d1ea 0xc003da8747 0xc003da8748}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003da87a8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 18 17:37:20.115: INFO: Pod "test-rolling-update-deployment-67cf4f6444-7h6k9" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-7h6k9 test-rolling-update-deployment-67cf4f6444- deployment-1711 /api/v1/namespaces/deployment-1711/pods/test-rolling-update-deployment-67cf4f6444-7h6k9 71ed6eb3-34da-47ae-bedf-55ae77d17a7b 9227391 0 2020-02-18 17:37:14 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 ed46ef5c-59e3-4c03-81d1-ff26bb7d06ec 0xc003da8f47 0xc003da8f48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s98g7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s98g7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s98g7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 17:37:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 17:37:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 17:37:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-18 17:37:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-18 17:37:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-18 17:37:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://3af94896a46ef97bbedb5a5c0131a1fceb7e1c8b7dbf5428f19652a9f1bf96e4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:37:20.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1711" for this suite.

• [SLOW TEST:13.415 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":251,"skipped":4132,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:37:20.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-778565e3-386a-4370-9bb7-b19f9cb2fdd2
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-778565e3-386a-4370-9bb7-b19f9cb2fdd2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:37:36.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-355" for this suite.

• [SLOW TEST:16.474 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":252,"skipped":4140,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:37:36.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-3c45230d-12f3-43ab-ac2a-811e5f6acffe
STEP: Creating a pod to test consume configMaps
Feb 18 17:37:36.729: INFO: Waiting up to 5m0s for pod "pod-configmaps-57e12113-2784-489b-832b-8a0a6d888326" in namespace "configmap-5134" to be "success or failure"
Feb 18 17:37:36.736: INFO: Pod "pod-configmaps-57e12113-2784-489b-832b-8a0a6d888326": Phase="Pending", Reason="", readiness=false. Elapsed: 6.357352ms
Feb 18 17:37:38.752: INFO: Pod "pod-configmaps-57e12113-2784-489b-832b-8a0a6d888326": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022731609s
Feb 18 17:37:40.762: INFO: Pod "pod-configmaps-57e12113-2784-489b-832b-8a0a6d888326": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032158977s
Feb 18 17:37:42.769: INFO: Pod "pod-configmaps-57e12113-2784-489b-832b-8a0a6d888326": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039281748s
Feb 18 17:37:44.781: INFO: Pod "pod-configmaps-57e12113-2784-489b-832b-8a0a6d888326": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05171844s
Feb 18 17:37:46.787: INFO: Pod "pod-configmaps-57e12113-2784-489b-832b-8a0a6d888326": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057213115s
STEP: Saw pod success
Feb 18 17:37:46.787: INFO: Pod "pod-configmaps-57e12113-2784-489b-832b-8a0a6d888326" satisfied condition "success or failure"
Feb 18 17:37:46.789: INFO: Trying to get logs from node jerma-node pod pod-configmaps-57e12113-2784-489b-832b-8a0a6d888326 container configmap-volume-test: 
STEP: delete the pod
Feb 18 17:37:46.966: INFO: Waiting for pod pod-configmaps-57e12113-2784-489b-832b-8a0a6d888326 to disappear
Feb 18 17:37:46.980: INFO: Pod pod-configmaps-57e12113-2784-489b-832b-8a0a6d888326 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:37:46.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5134" for this suite.

• [SLOW TEST:10.394 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":253,"skipped":4167,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:37:46.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override arguments
Feb 18 17:37:47.379: INFO: Waiting up to 5m0s for pod "client-containers-9a2368e5-e879-4f6a-a2a8-b44a7426495c" in namespace "containers-242" to be "success or failure"
Feb 18 17:37:47.528: INFO: Pod "client-containers-9a2368e5-e879-4f6a-a2a8-b44a7426495c": Phase="Pending", Reason="", readiness=false. Elapsed: 148.88294ms
Feb 18 17:37:49.535: INFO: Pod "client-containers-9a2368e5-e879-4f6a-a2a8-b44a7426495c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156520046s
Feb 18 17:37:51.541: INFO: Pod "client-containers-9a2368e5-e879-4f6a-a2a8-b44a7426495c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162609193s
Feb 18 17:37:53.613: INFO: Pod "client-containers-9a2368e5-e879-4f6a-a2a8-b44a7426495c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.234078248s
Feb 18 17:37:55.623: INFO: Pod "client-containers-9a2368e5-e879-4f6a-a2a8-b44a7426495c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.244092147s
STEP: Saw pod success
Feb 18 17:37:55.623: INFO: Pod "client-containers-9a2368e5-e879-4f6a-a2a8-b44a7426495c" satisfied condition "success or failure"
Feb 18 17:37:55.627: INFO: Trying to get logs from node jerma-node pod client-containers-9a2368e5-e879-4f6a-a2a8-b44a7426495c container test-container: 
STEP: delete the pod
Feb 18 17:37:55.712: INFO: Waiting for pod client-containers-9a2368e5-e879-4f6a-a2a8-b44a7426495c to disappear
Feb 18 17:37:55.780: INFO: Pod client-containers-9a2368e5-e879-4f6a-a2a8-b44a7426495c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:37:55.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-242" for this suite.

• [SLOW TEST:8.802 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":280,"completed":254,"skipped":4175,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:37:55.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-16b3e64d-2604-42a9-8185-1161e8be1a95
STEP: Creating a pod to test consume configMaps
Feb 18 17:37:56.001: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cb7e5794-0183-4fb7-8144-829d81083fd0" in namespace "projected-1593" to be "success or failure"
Feb 18 17:37:56.040: INFO: Pod "pod-projected-configmaps-cb7e5794-0183-4fb7-8144-829d81083fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 38.968743ms
Feb 18 17:37:58.047: INFO: Pod "pod-projected-configmaps-cb7e5794-0183-4fb7-8144-829d81083fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045727718s
Feb 18 17:38:00.055: INFO: Pod "pod-projected-configmaps-cb7e5794-0183-4fb7-8144-829d81083fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053259028s
Feb 18 17:38:02.065: INFO: Pod "pod-projected-configmaps-cb7e5794-0183-4fb7-8144-829d81083fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064121652s
Feb 18 17:38:04.081: INFO: Pod "pod-projected-configmaps-cb7e5794-0183-4fb7-8144-829d81083fd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080066137s
STEP: Saw pod success
Feb 18 17:38:04.082: INFO: Pod "pod-projected-configmaps-cb7e5794-0183-4fb7-8144-829d81083fd0" satisfied condition "success or failure"
Feb 18 17:38:04.089: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-cb7e5794-0183-4fb7-8144-829d81083fd0 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 18 17:38:04.211: INFO: Waiting for pod pod-projected-configmaps-cb7e5794-0183-4fb7-8144-829d81083fd0 to disappear
Feb 18 17:38:04.225: INFO: Pod pod-projected-configmaps-cb7e5794-0183-4fb7-8144-829d81083fd0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:38:04.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1593" for this suite.

• [SLOW TEST:8.434 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":255,"skipped":4193,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:38:04.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's command
Feb 18 17:38:04.328: INFO: Waiting up to 5m0s for pod "var-expansion-7fd883a6-3930-4983-ada7-0b4bd1b011c1" in namespace "var-expansion-2195" to be "success or failure"
Feb 18 17:38:04.333: INFO: Pod "var-expansion-7fd883a6-3930-4983-ada7-0b4bd1b011c1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.793869ms
Feb 18 17:38:06.732: INFO: Pod "var-expansion-7fd883a6-3930-4983-ada7-0b4bd1b011c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.404016014s
Feb 18 17:38:08.746: INFO: Pod "var-expansion-7fd883a6-3930-4983-ada7-0b4bd1b011c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.418190043s
Feb 18 17:38:10.755: INFO: Pod "var-expansion-7fd883a6-3930-4983-ada7-0b4bd1b011c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427747682s
Feb 18 17:38:12.763: INFO: Pod "var-expansion-7fd883a6-3930-4983-ada7-0b4bd1b011c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.435087922s
STEP: Saw pod success
Feb 18 17:38:12.763: INFO: Pod "var-expansion-7fd883a6-3930-4983-ada7-0b4bd1b011c1" satisfied condition "success or failure"
Feb 18 17:38:12.770: INFO: Trying to get logs from node jerma-node pod var-expansion-7fd883a6-3930-4983-ada7-0b4bd1b011c1 container dapi-container: 
STEP: delete the pod
Feb 18 17:38:13.292: INFO: Waiting for pod var-expansion-7fd883a6-3930-4983-ada7-0b4bd1b011c1 to disappear
Feb 18 17:38:13.531: INFO: Pod var-expansion-7fd883a6-3930-4983-ada7-0b4bd1b011c1 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:38:13.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2195" for this suite.

• [SLOW TEST:9.316 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":280,"completed":256,"skipped":4236,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:38:13.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 18 17:38:13.843: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92e31e60-0070-477b-bf10-d8351ebe6f8c" in namespace "downward-api-655" to be "success or failure"
Feb 18 17:38:13.850: INFO: Pod "downwardapi-volume-92e31e60-0070-477b-bf10-d8351ebe6f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.331536ms
Feb 18 17:38:15.864: INFO: Pod "downwardapi-volume-92e31e60-0070-477b-bf10-d8351ebe6f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02070338s
Feb 18 17:38:17.874: INFO: Pod "downwardapi-volume-92e31e60-0070-477b-bf10-d8351ebe6f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030569404s
Feb 18 17:38:19.882: INFO: Pod "downwardapi-volume-92e31e60-0070-477b-bf10-d8351ebe6f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038987546s
Feb 18 17:38:21.890: INFO: Pod "downwardapi-volume-92e31e60-0070-477b-bf10-d8351ebe6f8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046712871s
STEP: Saw pod success
Feb 18 17:38:21.890: INFO: Pod "downwardapi-volume-92e31e60-0070-477b-bf10-d8351ebe6f8c" satisfied condition "success or failure"
Feb 18 17:38:21.895: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-92e31e60-0070-477b-bf10-d8351ebe6f8c container client-container: 
STEP: delete the pod
Feb 18 17:38:21.949: INFO: Waiting for pod downwardapi-volume-92e31e60-0070-477b-bf10-d8351ebe6f8c to disappear
Feb 18 17:38:21.972: INFO: Pod downwardapi-volume-92e31e60-0070-477b-bf10-d8351ebe6f8c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:38:21.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-655" for this suite.

• [SLOW TEST:8.495 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":257,"skipped":4238,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:38:22.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-81c138f6-9185-45eb-a444-dbcbc89f64f3
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:38:32.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9894" for this suite.

• [SLOW TEST:10.153 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":258,"skipped":4243,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:38:32.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 18 17:38:32.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8958'
Feb 18 17:38:34.552: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 18 17:38:34.552: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795
Feb 18 17:38:34.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-8958'
Feb 18 17:38:34.812: INFO: stderr: ""
Feb 18 17:38:34.812: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:38:34.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8958" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":280,"completed":259,"skipped":4249,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:38:34.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9373
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Feb 18 17:38:35.053: INFO: Found 0 stateful pods, waiting for 3
Feb 18 17:38:45.179: INFO: Found 1 stateful pods, waiting for 3
Feb 18 17:38:55.061: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 17:38:55.061: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 17:38:55.061: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 18 17:39:05.061: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 17:39:05.061: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 17:39:05.061: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 17:39:05.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9373 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 18 17:39:05.567: INFO: stderr: "I0218 17:39:05.297276    4411 log.go:172] (0xc00098cf20) (0xc0005a5f40) Create stream\nI0218 17:39:05.297370    4411 log.go:172] (0xc00098cf20) (0xc0005a5f40) Stream added, broadcasting: 1\nI0218 17:39:05.318643    4411 log.go:172] (0xc00098cf20) Reply frame received for 1\nI0218 17:39:05.318679    4411 log.go:172] (0xc00098cf20) (0xc000b2a0a0) Create stream\nI0218 17:39:05.318714    4411 log.go:172] (0xc00098cf20) (0xc000b2a0a0) Stream added, broadcasting: 3\nI0218 17:39:05.321451    4411 log.go:172] (0xc00098cf20) Reply frame received for 3\nI0218 17:39:05.321485    4411 log.go:172] (0xc00098cf20) (0xc000b38140) Create stream\nI0218 17:39:05.321529    4411 log.go:172] (0xc00098cf20) (0xc000b38140) Stream added, broadcasting: 5\nI0218 17:39:05.323312    4411 log.go:172] (0xc00098cf20) Reply frame received for 5\nI0218 17:39:05.419263    4411 log.go:172] (0xc00098cf20) Data frame received for 5\nI0218 17:39:05.419404    4411 log.go:172] (0xc000b38140) (5) Data frame handling\nI0218 17:39:05.419443    4411 log.go:172] (0xc000b38140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0218 17:39:05.461803    4411 log.go:172] (0xc00098cf20) Data frame received for 3\nI0218 17:39:05.461831    4411 log.go:172] (0xc000b2a0a0) (3) Data frame handling\nI0218 17:39:05.461846    4411 log.go:172] (0xc000b2a0a0) (3) Data frame sent\nI0218 17:39:05.557812    4411 log.go:172] (0xc00098cf20) Data frame received for 1\nI0218 17:39:05.557882    4411 log.go:172] (0xc00098cf20) (0xc000b38140) Stream removed, broadcasting: 5\nI0218 17:39:05.557919    4411 log.go:172] (0xc0005a5f40) (1) Data frame handling\nI0218 17:39:05.557929    4411 log.go:172] (0xc0005a5f40) (1) Data frame sent\nI0218 17:39:05.558037    4411 log.go:172] (0xc00098cf20) (0xc000b2a0a0) Stream removed, broadcasting: 3\nI0218 17:39:05.558075    4411 log.go:172] (0xc00098cf20) (0xc0005a5f40) Stream removed, broadcasting: 1\nI0218 17:39:05.558095    4411 log.go:172] (0xc00098cf20) Go away received\nI0218 17:39:05.558747    4411 log.go:172] (0xc00098cf20) (0xc0005a5f40) Stream removed, broadcasting: 1\nI0218 17:39:05.558760    4411 log.go:172] (0xc00098cf20) (0xc000b2a0a0) Stream removed, broadcasting: 3\nI0218 17:39:05.558765    4411 log.go:172] (0xc00098cf20) (0xc000b38140) Stream removed, broadcasting: 5\n"
Feb 18 17:39:05.567: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 18 17:39:05.567: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 18 17:39:15.672: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 18 17:39:25.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9373 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 17:39:26.232: INFO: stderr: "I0218 17:39:26.049115    4430 log.go:172] (0xc0006f4840) (0xc000647d60) Create stream\nI0218 17:39:26.049285    4430 log.go:172] (0xc0006f4840) (0xc000647d60) Stream added, broadcasting: 1\nI0218 17:39:26.052842    4430 log.go:172] (0xc0006f4840) Reply frame received for 1\nI0218 17:39:26.052892    4430 log.go:172] (0xc0006f4840) (0xc000647e00) Create stream\nI0218 17:39:26.052906    4430 log.go:172] (0xc0006f4840) (0xc000647e00) Stream added, broadcasting: 3\nI0218 17:39:26.054037    4430 log.go:172] (0xc0006f4840) Reply frame received for 3\nI0218 17:39:26.054067    4430 log.go:172] (0xc0006f4840) (0xc00067a000) Create stream\nI0218 17:39:26.054081    4430 log.go:172] (0xc0006f4840) (0xc00067a000) Stream added, broadcasting: 5\nI0218 17:39:26.056777    4430 log.go:172] (0xc0006f4840) Reply frame received for 5\nI0218 17:39:26.148933    4430 log.go:172] (0xc0006f4840) Data frame received for 3\nI0218 17:39:26.149007    4430 log.go:172] (0xc000647e00) (3) Data frame handling\nI0218 17:39:26.149059    4430 log.go:172] (0xc0006f4840) Data frame received for 5\nI0218 17:39:26.149090    4430 log.go:172] (0xc00067a000) (5) Data frame handling\nI0218 17:39:26.149100    4430 log.go:172] (0xc00067a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 17:39:26.149121    4430 log.go:172] (0xc000647e00) (3) Data frame sent\nI0218 17:39:26.219371    4430 log.go:172] (0xc0006f4840) Data frame received for 1\nI0218 17:39:26.219415    4430 log.go:172] (0xc000647d60) (1) Data frame handling\nI0218 17:39:26.219438    4430 log.go:172] (0xc000647d60) (1) Data frame sent\nI0218 17:39:26.219485    4430 log.go:172] (0xc0006f4840) (0xc000647d60) Stream removed, broadcasting: 1\nI0218 17:39:26.219607    4430 log.go:172] (0xc0006f4840) (0xc00067a000) Stream removed, broadcasting: 5\nI0218 17:39:26.219780    4430 log.go:172] (0xc0006f4840) (0xc000647e00) Stream removed, broadcasting: 3\nI0218 17:39:26.219997    4430 log.go:172] (0xc0006f4840) Go away received\nI0218 17:39:26.221086    4430 log.go:172] (0xc0006f4840) (0xc000647d60) Stream removed, broadcasting: 1\nI0218 17:39:26.221209    4430 log.go:172] (0xc0006f4840) (0xc000647e00) Stream removed, broadcasting: 3\nI0218 17:39:26.221238    4430 log.go:172] (0xc0006f4840) (0xc00067a000) Stream removed, broadcasting: 5\n"
Feb 18 17:39:26.232: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 18 17:39:26.233: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 18 17:39:36.294: INFO: Waiting for StatefulSet statefulset-9373/ss2 to complete update
Feb 18 17:39:36.294: INFO: Waiting for Pod statefulset-9373/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 17:39:36.294: INFO: Waiting for Pod statefulset-9373/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 17:39:36.294: INFO: Waiting for Pod statefulset-9373/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 17:39:46.318: INFO: Waiting for StatefulSet statefulset-9373/ss2 to complete update
Feb 18 17:39:46.319: INFO: Waiting for Pod statefulset-9373/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 17:39:46.319: INFO: Waiting for Pod statefulset-9373/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 17:39:56.306: INFO: Waiting for StatefulSet statefulset-9373/ss2 to complete update
Feb 18 17:39:56.306: INFO: Waiting for Pod statefulset-9373/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 17:39:56.306: INFO: Waiting for Pod statefulset-9373/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 17:40:06.305: INFO: Waiting for StatefulSet statefulset-9373/ss2 to complete update
Feb 18 17:40:06.305: INFO: Waiting for Pod statefulset-9373/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 18 17:40:16.312: INFO: Waiting for StatefulSet statefulset-9373/ss2 to complete update
Feb 18 17:40:16.312: INFO: Waiting for Pod statefulset-9373/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Rolling back to a previous revision
Feb 18 17:40:26.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9373 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 18 17:40:26.805: INFO: stderr: "I0218 17:40:26.580966    4453 log.go:172] (0xc00058efd0) (0xc000703cc0) Create stream\nI0218 17:40:26.581275    4453 log.go:172] (0xc00058efd0) (0xc000703cc0) Stream added, broadcasting: 1\nI0218 17:40:26.587023    4453 log.go:172] (0xc00058efd0) Reply frame received for 1\nI0218 17:40:26.587146    4453 log.go:172] (0xc00058efd0) (0xc000a5e000) Create stream\nI0218 17:40:26.587196    4453 log.go:172] (0xc00058efd0) (0xc000a5e000) Stream added, broadcasting: 3\nI0218 17:40:26.588306    4453 log.go:172] (0xc00058efd0) Reply frame received for 3\nI0218 17:40:26.588352    4453 log.go:172] (0xc00058efd0) (0xc000a5e0a0) Create stream\nI0218 17:40:26.588370    4453 log.go:172] (0xc00058efd0) (0xc000a5e0a0) Stream added, broadcasting: 5\nI0218 17:40:26.589509    4453 log.go:172] (0xc00058efd0) Reply frame received for 5\nI0218 17:40:26.676865    4453 log.go:172] (0xc00058efd0) Data frame received for 5\nI0218 17:40:26.676915    4453 log.go:172] (0xc000a5e0a0) (5) Data frame handling\nI0218 17:40:26.676936    4453 log.go:172] (0xc000a5e0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/I0218 17:40:26.678811    4453 log.go:172] (0xc00058efd0) Data frame received for 5\nI0218 17:40:26.678829    4453 log.go:172] (0xc000a5e0a0) (5) Data frame handling\nI0218 17:40:26.678843    4453 log.go:172] (0xc000a5e0a0) (5) Data frame sent\n\nI0218 17:40:26.706435    4453 log.go:172] (0xc00058efd0) Data frame received for 3\nI0218 17:40:26.706451    4453 log.go:172] (0xc000a5e000) (3) Data frame handling\nI0218 17:40:26.706470    4453 log.go:172] (0xc000a5e000) (3) Data frame sent\nI0218 17:40:26.794853    4453 log.go:172] (0xc00058efd0) (0xc000a5e000) Stream removed, broadcasting: 3\nI0218 17:40:26.795173    4453 log.go:172] (0xc00058efd0) Data frame received for 1\nI0218 17:40:26.795196    4453 log.go:172] (0xc000703cc0) (1) Data frame handling\nI0218 17:40:26.795215    4453 log.go:172] (0xc000703cc0) (1) Data frame sent\nI0218 17:40:26.795231    4453 log.go:172] (0xc00058efd0) (0xc000703cc0) Stream removed, broadcasting: 1\nI0218 17:40:26.795822    4453 log.go:172] (0xc00058efd0) (0xc000a5e0a0) Stream removed, broadcasting: 5\nI0218 17:40:26.795853    4453 log.go:172] (0xc00058efd0) (0xc000703cc0) Stream removed, broadcasting: 1\nI0218 17:40:26.795862    4453 log.go:172] (0xc00058efd0) (0xc000a5e000) Stream removed, broadcasting: 3\nI0218 17:40:26.795868    4453 log.go:172] (0xc00058efd0) (0xc000a5e0a0) Stream removed, broadcasting: 5\n"
Feb 18 17:40:26.806: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 18 17:40:26.806: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 18 17:40:26.866: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 18 17:40:36.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9373 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 18 17:40:37.317: INFO: stderr: "I0218 17:40:37.130774    4473 log.go:172] (0xc00058a790) (0xc0008c80a0) Create stream\nI0218 17:40:37.130956    4473 log.go:172] (0xc00058a790) (0xc0008c80a0) Stream added, broadcasting: 1\nI0218 17:40:37.133687    4473 log.go:172] (0xc00058a790) Reply frame received for 1\nI0218 17:40:37.133760    4473 log.go:172] (0xc00058a790) (0xc000553400) Create stream\nI0218 17:40:37.133775    4473 log.go:172] (0xc00058a790) (0xc000553400) Stream added, broadcasting: 3\nI0218 17:40:37.134739    4473 log.go:172] (0xc00058a790) Reply frame received for 3\nI0218 17:40:37.134771    4473 log.go:172] (0xc00058a790) (0xc0008c8140) Create stream\nI0218 17:40:37.134778    4473 log.go:172] (0xc00058a790) (0xc0008c8140) Stream added, broadcasting: 5\nI0218 17:40:37.135918    4473 log.go:172] (0xc00058a790) Reply frame received for 5\nI0218 17:40:37.226896    4473 log.go:172] (0xc00058a790) Data frame received for 3\nI0218 17:40:37.226958    4473 log.go:172] (0xc000553400) (3) Data frame handling\nI0218 17:40:37.226979    4473 log.go:172] (0xc000553400) (3) Data frame sent\nI0218 17:40:37.227529    4473 log.go:172] (0xc00058a790) Data frame received for 5\nI0218 17:40:37.227560    4473 log.go:172] (0xc0008c8140) (5) Data frame handling\nI0218 17:40:37.227602    4473 log.go:172] (0xc0008c8140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0218 17:40:37.304450    4473 log.go:172] (0xc00058a790) Data frame received for 1\nI0218 17:40:37.304568    4473 log.go:172] (0xc00058a790) (0xc000553400) Stream removed, broadcasting: 3\nI0218 17:40:37.304622    4473 log.go:172] (0xc0008c80a0) (1) Data frame handling\nI0218 17:40:37.304648    4473 log.go:172] (0xc0008c80a0) (1) Data frame sent\nI0218 17:40:37.304704    4473 log.go:172] (0xc00058a790) (0xc0008c8140) Stream removed, broadcasting: 5\nI0218 17:40:37.304774    4473 log.go:172] (0xc00058a790) (0xc0008c80a0) Stream removed, broadcasting: 1\nI0218 17:40:37.304812    4473 log.go:172] (0xc00058a790) Go away received\nI0218 17:40:37.305681    4473 log.go:172] (0xc00058a790) (0xc0008c80a0) Stream removed, broadcasting: 1\nI0218 17:40:37.305697    4473 log.go:172] (0xc00058a790) (0xc000553400) Stream removed, broadcasting: 3\nI0218 17:40:37.305706    4473 log.go:172] (0xc00058a790) (0xc0008c8140) Stream removed, broadcasting: 5\n"
Feb 18 17:40:37.317: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 18 17:40:37.317: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 18 17:40:47.355: INFO: Waiting for StatefulSet statefulset-9373/ss2 to complete update
Feb 18 17:40:47.355: INFO: Waiting for Pod statefulset-9373/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 18 17:40:47.355: INFO: Waiting for Pod statefulset-9373/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 18 17:40:47.355: INFO: Waiting for Pod statefulset-9373/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 18 17:40:57.372: INFO: Waiting for StatefulSet statefulset-9373/ss2 to complete update
Feb 18 17:40:57.373: INFO: Waiting for Pod statefulset-9373/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 18 17:40:57.373: INFO: Waiting for Pod statefulset-9373/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 18 17:41:07.373: INFO: Waiting for StatefulSet statefulset-9373/ss2 to complete update
Feb 18 17:41:07.374: INFO: Waiting for Pod statefulset-9373/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 18 17:41:17.371: INFO: Waiting for StatefulSet statefulset-9373/ss2 to complete update
Feb 18 17:41:17.371: INFO: Waiting for Pod statefulset-9373/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 18 17:41:27.369: INFO: Waiting for StatefulSet statefulset-9373/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 18 17:41:37.368: INFO: Deleting all statefulset in ns statefulset-9373
Feb 18 17:41:37.381: INFO: Scaling statefulset ss2 to 0
Feb 18 17:42:17.414: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 17:42:17.419: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:42:17.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9373" for this suite.

• [SLOW TEST:222.685 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":280,"completed":260,"skipped":4250,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:42:17.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 18 17:42:27.659: INFO: Successfully updated pod "pod-update-755483e5-91dd-4aca-b4b9-6d5328559228"
STEP: verifying the updated pod is in kubernetes
Feb 18 17:42:27.681: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:42:27.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6722" for this suite.

• [SLOW TEST:10.178 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":280,"completed":261,"skipped":4263,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:42:27.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-f09dd8f7-f6b8-4424-9c77-06ae33899094
STEP: Creating a pod to test consume secrets
Feb 18 17:42:27.802: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2e7aeaad-4338-41e1-aa16-f0559a38578d" in namespace "projected-5018" to be "success or failure"
Feb 18 17:42:27.808: INFO: Pod "pod-projected-secrets-2e7aeaad-4338-41e1-aa16-f0559a38578d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.663078ms
Feb 18 17:42:29.816: INFO: Pod "pod-projected-secrets-2e7aeaad-4338-41e1-aa16-f0559a38578d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014352486s
Feb 18 17:42:31.832: INFO: Pod "pod-projected-secrets-2e7aeaad-4338-41e1-aa16-f0559a38578d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030306824s
Feb 18 17:42:33.843: INFO: Pod "pod-projected-secrets-2e7aeaad-4338-41e1-aa16-f0559a38578d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041223469s
Feb 18 17:42:35.855: INFO: Pod "pod-projected-secrets-2e7aeaad-4338-41e1-aa16-f0559a38578d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053411901s
Feb 18 17:42:37.873: INFO: Pod "pod-projected-secrets-2e7aeaad-4338-41e1-aa16-f0559a38578d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071126266s
STEP: Saw pod success
Feb 18 17:42:37.873: INFO: Pod "pod-projected-secrets-2e7aeaad-4338-41e1-aa16-f0559a38578d" satisfied condition "success or failure"
Feb 18 17:42:37.888: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-2e7aeaad-4338-41e1-aa16-f0559a38578d container projected-secret-volume-test: 
STEP: delete the pod
Feb 18 17:42:38.161: INFO: Waiting for pod pod-projected-secrets-2e7aeaad-4338-41e1-aa16-f0559a38578d to disappear
Feb 18 17:42:38.174: INFO: Pod pod-projected-secrets-2e7aeaad-4338-41e1-aa16-f0559a38578d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:42:38.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5018" for this suite.

• [SLOW TEST:10.509 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":262,"skipped":4266,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:42:38.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi version CRD
Feb 18 17:42:38.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:42:53.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1320" for this suite.

• [SLOW TEST:15.443 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":280,"completed":263,"skipped":4326,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:42:53.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 18 17:42:53.783: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b901e1f5-7fe8-4866-93e5-80557b62077b" in namespace "projected-6592" to be "success or failure"
Feb 18 17:42:53.807: INFO: Pod "downwardapi-volume-b901e1f5-7fe8-4866-93e5-80557b62077b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.733001ms
Feb 18 17:42:55.822: INFO: Pod "downwardapi-volume-b901e1f5-7fe8-4866-93e5-80557b62077b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038639824s
Feb 18 17:42:57.831: INFO: Pod "downwardapi-volume-b901e1f5-7fe8-4866-93e5-80557b62077b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047345367s
Feb 18 17:42:59.840: INFO: Pod "downwardapi-volume-b901e1f5-7fe8-4866-93e5-80557b62077b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056526744s
Feb 18 17:43:01.846: INFO: Pod "downwardapi-volume-b901e1f5-7fe8-4866-93e5-80557b62077b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061798573s
STEP: Saw pod success
Feb 18 17:43:01.846: INFO: Pod "downwardapi-volume-b901e1f5-7fe8-4866-93e5-80557b62077b" satisfied condition "success or failure"
Feb 18 17:43:01.850: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b901e1f5-7fe8-4866-93e5-80557b62077b container client-container: 
STEP: delete the pod
Feb 18 17:43:01.894: INFO: Waiting for pod downwardapi-volume-b901e1f5-7fe8-4866-93e5-80557b62077b to disappear
Feb 18 17:43:01.916: INFO: Pod downwardapi-volume-b901e1f5-7fe8-4866-93e5-80557b62077b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:43:01.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6592" for this suite.

• [SLOW TEST:8.277 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":280,"completed":264,"skipped":4347,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:43:01.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1168.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1168.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1168.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1168.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1168.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1168.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 17:43:12.330: INFO: DNS probes using dns-1168/dns-test-6eb29945-14d1-4c9f-bc02-292b5cd93f94 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:43:12.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1168" for this suite.

• [SLOW TEST:10.499 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":280,"completed":265,"skipped":4358,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:43:12.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating replication controller my-hostname-basic-37e5eb22-f211-4e0a-9433-b53b671bac9f
Feb 18 17:43:12.516: INFO: Pod name my-hostname-basic-37e5eb22-f211-4e0a-9433-b53b671bac9f: Found 0 pods out of 1
Feb 18 17:43:17.529: INFO: Pod name my-hostname-basic-37e5eb22-f211-4e0a-9433-b53b671bac9f: Found 1 pods out of 1
Feb 18 17:43:17.529: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-37e5eb22-f211-4e0a-9433-b53b671bac9f" are running
Feb 18 17:43:21.548: INFO: Pod "my-hostname-basic-37e5eb22-f211-4e0a-9433-b53b671bac9f-k5rnt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 17:43:12 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 17:43:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-37e5eb22-f211-4e0a-9433-b53b671bac9f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 17:43:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-37e5eb22-f211-4e0a-9433-b53b671bac9f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-18 17:43:12 +0000 UTC Reason: Message:}])
Feb 18 17:43:21.548: INFO: Trying to dial the pod
Feb 18 17:43:26.578: INFO: Controller my-hostname-basic-37e5eb22-f211-4e0a-9433-b53b671bac9f: Got expected result from replica 1 [my-hostname-basic-37e5eb22-f211-4e0a-9433-b53b671bac9f-k5rnt]: "my-hostname-basic-37e5eb22-f211-4e0a-9433-b53b671bac9f-k5rnt", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:43:26.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6131" for this suite.

• [SLOW TEST:14.167 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":266,"skipped":4371,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:43:26.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 18 17:43:26.716: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:43:27.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5140" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":280,"completed":267,"skipped":4376,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:43:27.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:43:33.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9756" for this suite.

• [SLOW TEST:5.819 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":280,"completed":268,"skipped":4403,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:43:33.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:43:50.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8087" for this suite.

• [SLOW TEST:17.244 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":280,"completed":269,"skipped":4406,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:43:50.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-bfecaf8d-818c-401a-8081-5f058f1d716f
STEP: Creating a pod to test consume secrets
Feb 18 17:43:50.878: INFO: Waiting up to 5m0s for pod "pod-secrets-090fa559-e5ce-4050-b4b8-ce6a1bec322b" in namespace "secrets-8025" to be "success or failure"
Feb 18 17:43:50.893: INFO: Pod "pod-secrets-090fa559-e5ce-4050-b4b8-ce6a1bec322b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.41633ms
Feb 18 17:43:52.903: INFO: Pod "pod-secrets-090fa559-e5ce-4050-b4b8-ce6a1bec322b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024899546s
Feb 18 17:43:54.909: INFO: Pod "pod-secrets-090fa559-e5ce-4050-b4b8-ce6a1bec322b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031053395s
Feb 18 17:43:56.915: INFO: Pod "pod-secrets-090fa559-e5ce-4050-b4b8-ce6a1bec322b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036801667s
Feb 18 17:43:58.924: INFO: Pod "pod-secrets-090fa559-e5ce-4050-b4b8-ce6a1bec322b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045880208s
Feb 18 17:44:00.933: INFO: Pod "pod-secrets-090fa559-e5ce-4050-b4b8-ce6a1bec322b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054928883s
STEP: Saw pod success
Feb 18 17:44:00.933: INFO: Pod "pod-secrets-090fa559-e5ce-4050-b4b8-ce6a1bec322b" satisfied condition "success or failure"
Feb 18 17:44:00.938: INFO: Trying to get logs from node jerma-node pod pod-secrets-090fa559-e5ce-4050-b4b8-ce6a1bec322b container secret-volume-test: 
STEP: delete the pod
Feb 18 17:44:00.977: INFO: Waiting for pod pod-secrets-090fa559-e5ce-4050-b4b8-ce6a1bec322b to disappear
Feb 18 17:44:01.014: INFO: Pod pod-secrets-090fa559-e5ce-4050-b4b8-ce6a1bec322b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:44:01.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8025" for this suite.

• [SLOW TEST:10.290 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":270,"skipped":4406,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:44:01.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-d475e67f-e6a0-4d9a-9e12-34dca2366fbf
STEP: Creating a pod to test consume secrets
Feb 18 17:44:01.121: INFO: Waiting up to 5m0s for pod "pod-secrets-d3dd7b84-c594-4fb6-ba6f-ee7f12f73b0c" in namespace "secrets-5949" to be "success or failure"
Feb 18 17:44:01.125: INFO: Pod "pod-secrets-d3dd7b84-c594-4fb6-ba6f-ee7f12f73b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.379221ms
Feb 18 17:44:03.132: INFO: Pod "pod-secrets-d3dd7b84-c594-4fb6-ba6f-ee7f12f73b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010334999s
Feb 18 17:44:05.142: INFO: Pod "pod-secrets-d3dd7b84-c594-4fb6-ba6f-ee7f12f73b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020426779s
Feb 18 17:44:07.150: INFO: Pod "pod-secrets-d3dd7b84-c594-4fb6-ba6f-ee7f12f73b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028376917s
Feb 18 17:44:09.158: INFO: Pod "pod-secrets-d3dd7b84-c594-4fb6-ba6f-ee7f12f73b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03619188s
Feb 18 17:44:11.165: INFO: Pod "pod-secrets-d3dd7b84-c594-4fb6-ba6f-ee7f12f73b0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.044053253s
STEP: Saw pod success
Feb 18 17:44:11.166: INFO: Pod "pod-secrets-d3dd7b84-c594-4fb6-ba6f-ee7f12f73b0c" satisfied condition "success or failure"
Feb 18 17:44:11.170: INFO: Trying to get logs from node jerma-node pod pod-secrets-d3dd7b84-c594-4fb6-ba6f-ee7f12f73b0c container secret-volume-test: 
STEP: delete the pod
Feb 18 17:44:11.233: INFO: Waiting for pod pod-secrets-d3dd7b84-c594-4fb6-ba6f-ee7f12f73b0c to disappear
Feb 18 17:44:11.245: INFO: Pod pod-secrets-d3dd7b84-c594-4fb6-ba6f-ee7f12f73b0c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:44:11.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5949" for this suite.

• [SLOW TEST:10.237 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":271,"skipped":4407,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:44:11.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 18 17:44:11.355: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 18 17:44:14.465: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:44:14.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1555" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":280,"completed":272,"skipped":4409,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:44:14.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-58dbc3af-6a21-428a-b45d-f102afa2469f
STEP: Creating a pod to test consume configMaps
Feb 18 17:44:15.717: INFO: Waiting up to 5m0s for pod "pod-configmaps-a1ef7ae3-dc63-4a2a-bf7e-abe0efdc76fc" in namespace "configmap-9697" to be "success or failure"
Feb 18 17:44:15.729: INFO: Pod "pod-configmaps-a1ef7ae3-dc63-4a2a-bf7e-abe0efdc76fc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.58179ms
Feb 18 17:44:18.972: INFO: Pod "pod-configmaps-a1ef7ae3-dc63-4a2a-bf7e-abe0efdc76fc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.254949735s
Feb 18 17:44:21.048: INFO: Pod "pod-configmaps-a1ef7ae3-dc63-4a2a-bf7e-abe0efdc76fc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.330789254s
Feb 18 17:44:23.054: INFO: Pod "pod-configmaps-a1ef7ae3-dc63-4a2a-bf7e-abe0efdc76fc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.336290695s
Feb 18 17:44:25.064: INFO: Pod "pod-configmaps-a1ef7ae3-dc63-4a2a-bf7e-abe0efdc76fc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.346578431s
Feb 18 17:44:27.159: INFO: Pod "pod-configmaps-a1ef7ae3-dc63-4a2a-bf7e-abe0efdc76fc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.441862358s
Feb 18 17:44:29.166: INFO: Pod "pod-configmaps-a1ef7ae3-dc63-4a2a-bf7e-abe0efdc76fc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.448931767s
Feb 18 17:44:31.173: INFO: Pod "pod-configmaps-a1ef7ae3-dc63-4a2a-bf7e-abe0efdc76fc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.455443341s
Feb 18 17:44:35.159: INFO: Pod "pod-configmaps-a1ef7ae3-dc63-4a2a-bf7e-abe0efdc76fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.441675442s
STEP: Saw pod success
Feb 18 17:44:35.159: INFO: Pod "pod-configmaps-a1ef7ae3-dc63-4a2a-bf7e-abe0efdc76fc" satisfied condition "success or failure"
Feb 18 17:44:35.196: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a1ef7ae3-dc63-4a2a-bf7e-abe0efdc76fc container configmap-volume-test: 
STEP: delete the pod
Feb 18 17:44:35.365: INFO: Waiting for pod pod-configmaps-a1ef7ae3-dc63-4a2a-bf7e-abe0efdc76fc to disappear
Feb 18 17:44:35.392: INFO: Pod pod-configmaps-a1ef7ae3-dc63-4a2a-bf7e-abe0efdc76fc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:44:35.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9697" for this suite.

• [SLOW TEST:20.839 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":273,"skipped":4472,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
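
defaultMode sets the permission bits applied to every file projected from the ConfigMap into the volume. A minimal sketch of the kind of pod this test creates, assuming k8s.io/api types and a busybox image (the e2e suite uses its own helper image and asserts the mode it reads back; names here are illustrative):

// Pod mounting a ConfigMap volume with defaultMode 0400, then reading the
// resulting file mode and content from inside the container.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	defaultMode := int32(0400) // read-only for the owner
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /etc/configmap-volume/data-1 && cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
						DefaultMode:          &defaultMode,
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}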
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:44:35.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 18 17:44:35.603: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19a13141-1e86-4641-867a-787f7e0d5b38" in namespace "projected-1303" to be "success or failure"
Feb 18 17:44:35.684: INFO: Pod "downwardapi-volume-19a13141-1e86-4641-867a-787f7e0d5b38": Phase="Pending", Reason="", readiness=false. Elapsed: 80.189216ms
Feb 18 17:44:37.692: INFO: Pod "downwardapi-volume-19a13141-1e86-4641-867a-787f7e0d5b38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088347316s
Feb 18 17:44:39.702: INFO: Pod "downwardapi-volume-19a13141-1e86-4641-867a-787f7e0d5b38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098950643s
Feb 18 17:44:41.711: INFO: Pod "downwardapi-volume-19a13141-1e86-4641-867a-787f7e0d5b38": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107800327s
Feb 18 17:44:43.716: INFO: Pod "downwardapi-volume-19a13141-1e86-4641-867a-787f7e0d5b38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112687209s
STEP: Saw pod success
Feb 18 17:44:43.716: INFO: Pod "downwardapi-volume-19a13141-1e86-4641-867a-787f7e0d5b38" satisfied condition "success or failure"
Feb 18 17:44:43.719: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-19a13141-1e86-4641-867a-787f7e0d5b38 container client-container: 
STEP: delete the pod
Feb 18 17:44:43.744: INFO: Waiting for pod downwardapi-volume-19a13141-1e86-4641-867a-787f7e0d5b38 to disappear
Feb 18 17:44:43.747: INFO: Pod downwardapi-volume-19a13141-1e86-4641-867a-787f7e0d5b38 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:44:43.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1303" for this suite.

• [SLOW TEST:8.329 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":274,"skipped":4475,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
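
A projected downward API volume exposes the container's own memory limit as a file: the kubelet writes the value of limits.memory (in bytes, with the default divisor of "1") to the named path, so the container can simply cat it. A minimal sketch under the same assumptions as above (busybox, illustrative names, k8s.io/api types):

// Pod whose projected volume surfaces the container's memory limit at
// /etc/podinfo/memory_limit via a resourceFieldRef.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					// The resourceFieldRef below reads this limit back.
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}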
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:44:43.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1735
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 18 17:44:43.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-4573'
Feb 18 17:44:44.113: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 18 17:44:44.114: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1740
Feb 18 17:44:46.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4573'
Feb 18 17:44:46.482: INFO: stderr: ""
Feb 18 17:44:46.482: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:44:46.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4573" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":280,"completed":275,"skipped":4477,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
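
Note the deprecation warning in the stderr above: generator-based kubectl run was removed in later kubectl releases, and the modern equivalent of this step is kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine. Either command produces roughly the Deployment below; a sketch with k8s.io/api types, where the run label mirrors what the old generator applied and everything else is assumed:

// Deployment equivalent to the deprecated
// `kubectl run --generator=deployment/apps.v1` invocation in the log.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"run": "e2e-test-httpd-deployment"}
	replicas := int32(1)
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-httpd-deployment",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(b))
}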
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:44:46.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override command
Feb 18 17:44:46.757: INFO: Waiting up to 5m0s for pod "client-containers-1bb34d70-52a5-4fe7-9f55-3003feed0359" in namespace "containers-4316" to be "success or failure"
Feb 18 17:44:46.787: INFO: Pod "client-containers-1bb34d70-52a5-4fe7-9f55-3003feed0359": Phase="Pending", Reason="", readiness=false. Elapsed: 29.360692ms
Feb 18 17:44:48.794: INFO: Pod "client-containers-1bb34d70-52a5-4fe7-9f55-3003feed0359": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036165482s
Feb 18 17:44:50.800: INFO: Pod "client-containers-1bb34d70-52a5-4fe7-9f55-3003feed0359": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042725321s
Feb 18 17:44:52.824: INFO: Pod "client-containers-1bb34d70-52a5-4fe7-9f55-3003feed0359": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066445356s
Feb 18 17:44:54.832: INFO: Pod "client-containers-1bb34d70-52a5-4fe7-9f55-3003feed0359": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074758656s
Feb 18 17:44:56.839: INFO: Pod "client-containers-1bb34d70-52a5-4fe7-9f55-3003feed0359": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081353749s
STEP: Saw pod success
Feb 18 17:44:56.839: INFO: Pod "client-containers-1bb34d70-52a5-4fe7-9f55-3003feed0359" satisfied condition "success or failure"
Feb 18 17:44:56.843: INFO: Trying to get logs from node jerma-node pod client-containers-1bb34d70-52a5-4fe7-9f55-3003feed0359 container test-container: 
STEP: delete the pod
Feb 18 17:44:57.364: INFO: Waiting for pod client-containers-1bb34d70-52a5-4fe7-9f55-3003feed0359 to disappear
Feb 18 17:44:57.375: INFO: Pod client-containers-1bb34d70-52a5-4fe7-9f55-3003feed0359 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:44:57.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4316" for this suite.

• [SLOW TEST:10.811 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":280,"completed":276,"skipped":4505,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
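
The entrypoint-override test relies on the standing Kubernetes rule that a container's command replaces the image's ENTRYPOINT while args replaces its CMD; when command is set and args is not, the image's CMD is ignored as well. A minimal sketch (busybox and the echoed string are illustrative):

// Pod that overrides the image's default entrypoint via the command field.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Command replaces the image's ENTRYPOINT; Args would replace its CMD.
				Command: []string{"/bin/echo", "entrypoint overridden"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}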
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:44:57.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-5335
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 18 17:44:57.497: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 18 17:44:57.598: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 18 17:44:59.605: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 18 17:45:01.607: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 18 17:45:04.167: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 18 17:45:05.769: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 18 17:45:07.608: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 18 17:45:09.610: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 18 17:45:11.608: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 18 17:45:13.606: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 18 17:45:15.613: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 18 17:45:17.605: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 18 17:45:19.605: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 18 17:45:21.605: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 18 17:45:21.614: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 18 17:45:29.699: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5335 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 17:45:29.699: INFO: >>> kubeConfig: /root/.kube/config
I0218 17:45:29.758855       9 log.go:172] (0xc0018d6160) (0xc0015983c0) Create stream
I0218 17:45:29.758948       9 log.go:172] (0xc0018d6160) (0xc0015983c0) Stream added, broadcasting: 1
I0218 17:45:29.764102       9 log.go:172] (0xc0018d6160) Reply frame received for 1
I0218 17:45:29.764143       9 log.go:172] (0xc0018d6160) (0xc001598460) Create stream
I0218 17:45:29.764160       9 log.go:172] (0xc0018d6160) (0xc001598460) Stream added, broadcasting: 3
I0218 17:45:29.766404       9 log.go:172] (0xc0018d6160) Reply frame received for 3
I0218 17:45:29.766443       9 log.go:172] (0xc0018d6160) (0xc00191c460) Create stream
I0218 17:45:29.766459       9 log.go:172] (0xc0018d6160) (0xc00191c460) Stream added, broadcasting: 5
I0218 17:45:29.768283       9 log.go:172] (0xc0018d6160) Reply frame received for 5
I0218 17:45:30.902085       9 log.go:172] (0xc0018d6160) Data frame received for 3
I0218 17:45:30.902188       9 log.go:172] (0xc001598460) (3) Data frame handling
I0218 17:45:30.902232       9 log.go:172] (0xc001598460) (3) Data frame sent
I0218 17:45:31.011178       9 log.go:172] (0xc0018d6160) Data frame received for 1
I0218 17:45:31.011294       9 log.go:172] (0xc0018d6160) (0xc001598460) Stream removed, broadcasting: 3
I0218 17:45:31.011378       9 log.go:172] (0xc0015983c0) (1) Data frame handling
I0218 17:45:31.011407       9 log.go:172] (0xc0015983c0) (1) Data frame sent
I0218 17:45:31.011435       9 log.go:172] (0xc0018d6160) (0xc00191c460) Stream removed, broadcasting: 5
I0218 17:45:31.011456       9 log.go:172] (0xc0018d6160) (0xc0015983c0) Stream removed, broadcasting: 1
I0218 17:45:31.011467       9 log.go:172] (0xc0018d6160) Go away received
I0218 17:45:31.011652       9 log.go:172] (0xc0018d6160) (0xc0015983c0) Stream removed, broadcasting: 1
I0218 17:45:31.011666       9 log.go:172] (0xc0018d6160) (0xc001598460) Stream removed, broadcasting: 3
I0218 17:45:31.011673       9 log.go:172] (0xc0018d6160) (0xc00191c460) Stream removed, broadcasting: 5
Feb 18 17:45:31.011: INFO: Found all expected endpoints: [netserver-0]
Feb 18 17:45:31.019: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5335 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 17:45:31.019: INFO: >>> kubeConfig: /root/.kube/config
I0218 17:45:31.081054       9 log.go:172] (0xc001f4c160) (0xc00191c500) Create stream
I0218 17:45:31.081214       9 log.go:172] (0xc001f4c160) (0xc00191c500) Stream added, broadcasting: 1
I0218 17:45:31.087361       9 log.go:172] (0xc001f4c160) Reply frame received for 1
I0218 17:45:31.087411       9 log.go:172] (0xc001f4c160) (0xc00170e460) Create stream
I0218 17:45:31.087453       9 log.go:172] (0xc001f4c160) (0xc00170e460) Stream added, broadcasting: 3
I0218 17:45:31.089504       9 log.go:172] (0xc001f4c160) Reply frame received for 3
I0218 17:45:31.089564       9 log.go:172] (0xc001f4c160) (0xc00170e640) Create stream
I0218 17:45:31.089579       9 log.go:172] (0xc001f4c160) (0xc00170e640) Stream added, broadcasting: 5
I0218 17:45:31.091271       9 log.go:172] (0xc001f4c160) Reply frame received for 5
I0218 17:45:32.157373       9 log.go:172] (0xc001f4c160) Data frame received for 3
I0218 17:45:32.157557       9 log.go:172] (0xc00170e460) (3) Data frame handling
I0218 17:45:32.157574       9 log.go:172] (0xc00170e460) (3) Data frame sent
I0218 17:45:32.275655       9 log.go:172] (0xc001f4c160) Data frame received for 1
I0218 17:45:32.275818       9 log.go:172] (0xc00191c500) (1) Data frame handling
I0218 17:45:32.275867       9 log.go:172] (0xc00191c500) (1) Data frame sent
I0218 17:45:32.275890       9 log.go:172] (0xc001f4c160) (0xc00191c500) Stream removed, broadcasting: 1
I0218 17:45:32.276174       9 log.go:172] (0xc001f4c160) (0xc00170e460) Stream removed, broadcasting: 3
I0218 17:45:32.276447       9 log.go:172] (0xc001f4c160) (0xc00170e640) Stream removed, broadcasting: 5
I0218 17:45:32.276485       9 log.go:172] (0xc001f4c160) (0xc00191c500) Stream removed, broadcasting: 1
I0218 17:45:32.276493       9 log.go:172] (0xc001f4c160) (0xc00170e460) Stream removed, broadcasting: 3
I0218 17:45:32.276500       9 log.go:172] (0xc001f4c160) (0xc00170e640) Stream removed, broadcasting: 5
I0218 17:45:32.276508       9 log.go:172] (0xc001f4c160) Go away received
Feb 18 17:45:32.276: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:45:32.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5335" for this suite.

• [SLOW TEST:34.895 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":277,"skipped":4533,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
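
The probe executed above (echo hostName | nc -w 1 -u <pod IP> 8081) sends "hostName" over UDP to each netserver pod and expects the pod's hostname back, which is how the test confirms node-to-pod UDP connectivity. A minimal sketch of the same check in Go; the IP and port are taken from the log, the rest is assumed and only works from inside the cluster network:

// UDP equivalent of: echo hostName | nc -w 1 -u 10.44.0.1 8081
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.Dial("udp", "10.44.0.1:8081")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Send the probe string the netserver responds to.
	if _, err := conn.Write([]byte("hostName")); err != nil {
		panic(err)
	}
	conn.SetReadDeadline(time.Now().Add(1 * time.Second)) // mirror nc -w 1
	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("reply: %q\n", buf[:n]) // expected: the pod's hostname, e.g. "netserver-0"
}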
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:45:32.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-9197a5bb-f53c-46f0-a0c1-14056eca0869
STEP: Creating a pod to test consume configMaps
Feb 18 17:45:32.441: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d19f73f2-8f36-424e-a0cb-f05e975281aa" in namespace "projected-6394" to be "success or failure"
Feb 18 17:45:32.471: INFO: Pod "pod-projected-configmaps-d19f73f2-8f36-424e-a0cb-f05e975281aa": Phase="Pending", Reason="", readiness=false. Elapsed: 29.140804ms
Feb 18 17:45:34.480: INFO: Pod "pod-projected-configmaps-d19f73f2-8f36-424e-a0cb-f05e975281aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038603465s
Feb 18 17:45:37.004: INFO: Pod "pod-projected-configmaps-d19f73f2-8f36-424e-a0cb-f05e975281aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.562708242s
Feb 18 17:45:39.104: INFO: Pod "pod-projected-configmaps-d19f73f2-8f36-424e-a0cb-f05e975281aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.662377127s
Feb 18 17:45:41.172: INFO: Pod "pod-projected-configmaps-d19f73f2-8f36-424e-a0cb-f05e975281aa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.73051573s
Feb 18 17:45:43.179: INFO: Pod "pod-projected-configmaps-d19f73f2-8f36-424e-a0cb-f05e975281aa": Phase="Pending", Reason="", readiness=false. Elapsed: 10.737390495s
Feb 18 17:45:45.183: INFO: Pod "pod-projected-configmaps-d19f73f2-8f36-424e-a0cb-f05e975281aa": Phase="Pending", Reason="", readiness=false. Elapsed: 12.741742186s
Feb 18 17:45:47.191: INFO: Pod "pod-projected-configmaps-d19f73f2-8f36-424e-a0cb-f05e975281aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.74912353s
STEP: Saw pod success
Feb 18 17:45:47.191: INFO: Pod "pod-projected-configmaps-d19f73f2-8f36-424e-a0cb-f05e975281aa" satisfied condition "success or failure"
Feb 18 17:45:47.194: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-d19f73f2-8f36-424e-a0cb-f05e975281aa container projected-configmap-volume-test: 
STEP: delete the pod
Feb 18 17:45:47.227: INFO: Waiting for pod pod-projected-configmaps-d19f73f2-8f36-424e-a0cb-f05e975281aa to disappear
Feb 18 17:45:47.239: INFO: Pod pod-projected-configmaps-d19f73f2-8f36-424e-a0cb-f05e975281aa no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:45:47.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6394" for this suite.

• [SLOW TEST:14.955 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":278,"skipped":4542,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
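
Projected configMap volumes behave like the plain configMap volumes tested earlier, but go through the projected volume source, which can merge several sources (configMaps, secrets, downward API, service account tokens) into a single mount. A sketch of just the volume definition, assuming k8s.io/api types; the configMap name is illustrative:

// A projected volume with a single configMap source, mirroring the test above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}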
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 18 17:45:47.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-5066a39a-145c-497d-b369-fb36e3e35bae
STEP: Creating a pod to test consume configMaps
Feb 18 17:45:47.836: INFO: Waiting up to 5m0s for pod "pod-configmaps-590be9d8-6387-4925-9959-d231153e34e0" in namespace "configmap-1703" to be "success or failure"
Feb 18 17:45:47.842: INFO: Pod "pod-configmaps-590be9d8-6387-4925-9959-d231153e34e0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.733189ms
Feb 18 17:45:49.860: INFO: Pod "pod-configmaps-590be9d8-6387-4925-9959-d231153e34e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023947039s
Feb 18 17:45:51.875: INFO: Pod "pod-configmaps-590be9d8-6387-4925-9959-d231153e34e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038899226s
Feb 18 17:45:53.893: INFO: Pod "pod-configmaps-590be9d8-6387-4925-9959-d231153e34e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056211497s
Feb 18 17:45:55.902: INFO: Pod "pod-configmaps-590be9d8-6387-4925-9959-d231153e34e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065998913s
STEP: Saw pod success
Feb 18 17:45:55.903: INFO: Pod "pod-configmaps-590be9d8-6387-4925-9959-d231153e34e0" satisfied condition "success or failure"
Feb 18 17:45:55.907: INFO: Trying to get logs from node jerma-node pod pod-configmaps-590be9d8-6387-4925-9959-d231153e34e0 container configmap-volume-test: 
STEP: delete the pod
Feb 18 17:45:56.208: INFO: Waiting for pod pod-configmaps-590be9d8-6387-4925-9959-d231153e34e0 to disappear
Feb 18 17:45:56.216: INFO: Pod pod-configmaps-590be9d8-6387-4925-9959-d231153e34e0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 18 17:45:56.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1703" for this suite.

• [SLOW TEST:8.983 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":279,"skipped":4543,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
Feb 18 17:45:56.233: INFO: Running AfterSuite actions on all nodes
Feb 18 17:45:56.233: INFO: Running AfterSuite actions on node 1
Feb 18 17:45:56.233: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":280,"completed":279,"skipped":4565,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339

Ran 280 of 4845 Specs in 6920.839 seconds
FAIL! -- 279 Passed | 1 Failed | 0 Pending | 4565 Skipped
--- FAIL: TestE2E (6920.94s)
FAIL