I0318 12:55:41.683223 6 e2e.go:243] Starting e2e run "45fa171b-fe9c-4d42-91a3-8e02975baf31" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584536140 - Will randomize all specs
Will run 215 of 4412 specs

Mar 18 12:55:41.864: INFO: >>> kubeConfig: /root/.kube/config
Mar 18 12:55:41.868: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 18 12:55:41.900: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 18 12:55:41.928: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 18 12:55:41.928: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 18 12:55:41.928: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 18 12:55:41.950: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 18 12:55:41.950: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 18 12:55:41.950: INFO: e2e test version: v1.15.10
Mar 18 12:55:41.951: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 12:55:41.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Mar 18 12:55:41.984: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 18 12:55:41.990: INFO: Waiting up to 5m0s for pod "pod-7ccfa06d-4762-4970-bded-b092a35dc3ba" in namespace "emptydir-6690" to be "success or failure"
Mar 18 12:55:42.031: INFO: Pod "pod-7ccfa06d-4762-4970-bded-b092a35dc3ba": Phase="Pending", Reason="", readiness=false. Elapsed: 40.867928ms
Mar 18 12:55:44.034: INFO: Pod "pod-7ccfa06d-4762-4970-bded-b092a35dc3ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044686975s
Mar 18 12:55:46.038: INFO: Pod "pod-7ccfa06d-4762-4970-bded-b092a35dc3ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048791658s
STEP: Saw pod success
Mar 18 12:55:46.039: INFO: Pod "pod-7ccfa06d-4762-4970-bded-b092a35dc3ba" satisfied condition "success or failure"
Mar 18 12:55:46.041: INFO: Trying to get logs from node iruya-worker2 pod pod-7ccfa06d-4762-4970-bded-b092a35dc3ba container test-container:
STEP: delete the pod
Mar 18 12:55:46.063: INFO: Waiting for pod pod-7ccfa06d-4762-4970-bded-b092a35dc3ba to disappear
Mar 18 12:55:46.068: INFO: Pod pod-7ccfa06d-4762-4970-bded-b092a35dc3ba no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 12:55:46.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6690" for this suite.
Mar 18 12:55:52.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 12:55:52.165: INFO: namespace emptydir-6690 deletion completed in 6.093808796s
• [SLOW TEST:10.213 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
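For reference: the spec above creates a pod whose emptyDir is backed by tmpfs (medium "Memory") and verifies a 0666 file mode through it. Below is a minimal client-go sketch of a comparable pod, not the test's actual source (which lives in test/e2e/common/empty_dir.go); the busybox image, all names, and the v1.15-era Create signature are assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the run above points at.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "emptydir-tmpfs-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Medium "Memory" asks the kubelet to back the emptyDir with tmpfs.
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // hypothetical; the e2e test uses its own mounttest image
				// Create a file with mode 0666 and echo the mode back, roughly what the spec checks.
				Command:      []string{"sh", "-c", "touch /mnt/test/f && chmod 0666 /mnt/test/f && stat -c %a /mnt/test/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
			}},
		},
	}
	created, err := clientset.CoreV1().Pods("default").Create(pod) // v1.15-era signature
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}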
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 12:55:52.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 18 12:55:52.230: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc8d7a11-c217-4b42-b4d7-e16bedd6e9a4" in namespace "downward-api-9450" to be "success or failure"
Mar 18 12:55:52.239: INFO: Pod "downwardapi-volume-bc8d7a11-c217-4b42-b4d7-e16bedd6e9a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.819991ms
Mar 18 12:55:54.245: INFO: Pod "downwardapi-volume-bc8d7a11-c217-4b42-b4d7-e16bedd6e9a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015024208s
Mar 18 12:55:56.250: INFO: Pod "downwardapi-volume-bc8d7a11-c217-4b42-b4d7-e16bedd6e9a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019794903s
STEP: Saw pod success
Mar 18 12:55:56.250: INFO: Pod "downwardapi-volume-bc8d7a11-c217-4b42-b4d7-e16bedd6e9a4" satisfied condition "success or failure"
Mar 18 12:55:56.254: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-bc8d7a11-c217-4b42-b4d7-e16bedd6e9a4 container client-container:
STEP: delete the pod
Mar 18 12:55:56.311: INFO: Waiting for pod downwardapi-volume-bc8d7a11-c217-4b42-b4d7-e16bedd6e9a4 to disappear
Mar 18 12:55:56.322: INFO: Pod downwardapi-volume-bc8d7a11-c217-4b42-b4d7-e16bedd6e9a4 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 12:55:56.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9450" for this suite.
Mar 18 12:56:02.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 12:56:02.423: INFO: namespace downward-api-9450 deletion completed in 6.097286405s
• [SLOW TEST:10.258 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
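For reference: this spec mounts a downward API volume that projects the container's CPU request into a file. A minimal sketch of that wiring follows, under the same v1.15-era client-go assumption as the earlier sketch; the names, image, and the 250m request are illustrative.

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createDownwardAPIPod projects requests.cpu into /etc/podinfo/cpu_request.
// The clientset is built as in the earlier sketch.
func createDownwardAPIPod(clientset *kubernetes.Clientset, ns string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							// resourceFieldRef resolves against the named container's resources.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // hypothetical image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	return clientset.CoreV1().Pods(ns).Create(pod)
}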
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 12:56:02.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Mar 18 12:56:02.456: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 18 12:56:02.481: INFO: Waiting for terminating namespaces to be deleted...
Mar 18 12:56:02.483: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Mar 18 12:56:02.488: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Mar 18 12:56:02.488: INFO: Container kube-proxy ready: true, restart count 0
Mar 18 12:56:02.488: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Mar 18 12:56:02.488: INFO: Container kindnet-cni ready: true, restart count 0
Mar 18 12:56:02.488: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Mar 18 12:56:02.494: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Mar 18 12:56:02.494: INFO: Container kindnet-cni ready: true, restart count 0
Mar 18 12:56:02.494: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Mar 18 12:56:02.494: INFO: Container kube-proxy ready: true, restart count 0
Mar 18 12:56:02.494: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Mar 18 12:56:02.494: INFO: Container coredns ready: true, restart count 0
Mar 18 12:56:02.494: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Mar 18 12:56:02.494: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-179f5cf8-270d-4832-8b8b-406b382349f3 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-179f5cf8-270d-4832-8b8b-406b382349f3 off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-179f5cf8-270d-4832-8b8b-406b382349f3
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 12:56:10.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5594" for this suite.
Mar 18 12:56:18.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 12:56:18.712: INFO: namespace sched-pred-5594 deletion completed in 8.0835933s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:16.289 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
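For reference: the predicate test above labels a node and then relaunches the pod with a matching nodeSelector. A sketch of those two calls follows, under the same v1.15-era client-go assumption; the label key/value, image, and pod details are illustrative.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// scheduleOntoLabeledNode pins a pod to nodeName by applying a label and
// using a matching nodeSelector, mirroring the steps logged above.
func scheduleOntoLabeledNode(clientset *kubernetes.Clientset, ns, nodeName string) error {
	// Apply the label with a strategic-merge patch (key/value are illustrative).
	patch := []byte(`{"metadata":{"labels":{"example.com/e2e-demo":"42"}}}`)
	if _, err := clientset.CoreV1().Nodes().Patch(nodeName, types.StrategicMergePatchType, patch); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "with-labels-"},
		Spec: corev1.PodSpec{
			// The scheduler only considers nodes carrying this exact label.
			NodeSelector: map[string]string{"example.com/e2e-demo": "42"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // hypothetical choice of image
			}},
		},
	}
	_, err := clientset.CoreV1().Pods(ns).Create(pod)
	return err
}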
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 12:56:18.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Mar 18 12:56:18.794: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 12:56:18.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5748" for this suite.
Mar 18 12:56:24.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 12:56:24.981: INFO: namespace kubectl-5748 deletion completed in 6.092761704s
• [SLOW TEST:6.268 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
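For reference: with -p 0, kubectl proxy binds an ephemeral port and reports the chosen address on stdout, which is what the test then curls. A rough sketch of driving that from Go; the "Starting to serve on 127.0.0.1:NNNNN" output format is an assumption about kubectl's wording, as is everything else here.

package main

import (
	"bufio"
	"fmt"
	"io/ioutil"
	"net/http"
	"os/exec"
	"regexp"
)

func main() {
	// Ask for port 0 so the kernel picks a free port, as in the spec above.
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "proxy", "-p", "0", "--disable-filter=true")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	// First stdout line is expected to look like: Starting to serve on 127.0.0.1:NNNNN
	line, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		panic(err)
	}
	port := regexp.MustCompile(`:(\d+)`).FindStringSubmatch(line)[1]

	// Equivalent of the test's "curling proxy /api/ output" step.
	resp, err := http.Get("http://127.0.0.1:" + port + "/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("proxy /api/ returned %d: %s\n", resp.StatusCode, body)
}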
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 12:56:24.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-2128, will wait for the garbage collector to delete the pods
Mar 18 12:56:29.099: INFO: Deleting Job.batch foo took: 6.381044ms
Mar 18 12:56:29.399: INFO: Terminating Job.batch foo pods took: 300.283904ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 12:57:11.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2128" for this suite.
Mar 18 12:57:17.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 12:57:17.996: INFO: namespace job-2128 deletion completed in 6.089370491s
• [SLOW TEST:53.015 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 12:57:17.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0318 12:57:19.135957 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 18 12:57:19.136: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 12:57:19.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6576" for this suite.
Mar 18 12:57:25.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 12:57:25.240: INFO: namespace gc-6576 deletion completed in 6.101891442s
• [SLOW TEST:7.244 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
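For reference: "will wait for the garbage collector to delete the pods" in the Job and GC specs above corresponds to deleting the owner with a non-orphaning propagation policy and polling until the dependents are gone. A minimal sketch, assuming the v1.15-era typed client; the "job-name" selector relies on the label the job controller sets on its pods, and the timeouts are illustrative.

package example

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteJobAndWait deletes a Job and blocks until the garbage collector
// has removed the pods it owned.
func deleteJobAndWait(clientset *kubernetes.Clientset, ns, name string) error {
	// Background propagation: the Job object goes away immediately and the
	// GC deletes dependent pods asynchronously (Foreground would also work).
	policy := metav1.DeletePropagationBackground
	if err := clientset.BatchV1().Jobs(ns).Delete(name, &metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		return err
	}
	// Poll until no pods with the controller-applied job-name label remain.
	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		pods, err := clientset.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: "job-name=" + name})
		if err != nil {
			return false, err
		}
		return len(pods.Items) == 0, nil
	})
}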
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 12:57:25.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 18 12:57:25.365: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Mar 18 12:57:30.370: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 18 12:57:30.370: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Mar 18 12:57:30.433: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2272,SelfLink:/apis/apps/v1/namespaces/deployment-2272/deployments/test-cleanup-deployment,UID:393679b1-31b1-46fe-abfa-0f99a63f9264,ResourceVersion:513026,Generation:1,CreationTimestamp:2020-03-18 12:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
Mar 18 12:57:30.439: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2272,SelfLink:/apis/apps/v1/namespaces/deployment-2272/replicasets/test-cleanup-deployment-55bbcbc84c,UID:da217030-2efe-4b90-836f-e703c6c9a00b,ResourceVersion:513028,Generation:1,CreationTimestamp:2020-03-18 12:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 393679b1-31b1-46fe-abfa-0f99a63f9264 0xc00170f6f7 0xc00170f6f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Mar 18 12:57:30.439: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Mar 18 12:57:30.439: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2272,SelfLink:/apis/apps/v1/namespaces/deployment-2272/replicasets/test-cleanup-controller,UID:3643d7c1-4a18-41b4-88d1-2882a8c0fefc,ResourceVersion:513027,Generation:1,CreationTimestamp:2020-03-18 12:57:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 393679b1-31b1-46fe-abfa-0f99a63f9264 0xc00170f627 0xc00170f628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Mar 18 12:57:30.494: INFO: Pod "test-cleanup-controller-6z7ns" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-6z7ns,GenerateName:test-cleanup-controller-,Namespace:deployment-2272,SelfLink:/api/v1/namespaces/deployment-2272/pods/test-cleanup-controller-6z7ns,UID:df867149-57b8-459a-ad20-64c07802c767,ResourceVersion:513018,Generation:0,CreationTimestamp:2020-03-18 12:57:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 3643d7c1-4a18-41b4-88d1-2882a8c0fefc 0xc00170fff7 0xc00170fff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6h2wf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6h2wf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6h2wf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001982070} {node.kubernetes.io/unreachable Exists NoExecute 0xc001982090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:57:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:57:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:57:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:57:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.240,StartTime:2020-03-18 12:57:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 12:57:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8b8cdd74cf7da4860f5d2f2bf69cc2e6c36cd651bcb8b68e4e7d7c0b6c587f8b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Mar 18 12:57:30.494: INFO: Pod "test-cleanup-deployment-55bbcbc84c-jp5v9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-jp5v9,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2272,SelfLink:/api/v1/namespaces/deployment-2272/pods/test-cleanup-deployment-55bbcbc84c-jp5v9,UID:e6e791b9-0595-4e64-b3e2-acc92dc8dbb3,ResourceVersion:513034,Generation:0,CreationTimestamp:2020-03-18 12:57:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c da217030-2efe-4b90-836f-e703c6c9a00b 0xc001982167 0xc001982168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6h2wf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6h2wf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-6h2wf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019821e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001982200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 12:57:30 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 12:57:30.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2272" for this suite.
Mar 18 12:57:36.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 12:57:36.655: INFO: namespace deployment-2272 deletion completed in 6.08317785s
• [SLOW TEST:11.415 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
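For reference: the dump above shows the cleanup deployment created with RevisionHistoryLimit:*0, which is what makes the controller prune superseded ReplicaSets after a rollout. A sketch of setting that field, under the same v1.15-era client-go assumption; apart from RevisionHistoryLimit and the redis test image visible in the dump, the details are illustrative.

package example

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func int32Ptr(i int32) *int32 { return &i }

// createCleanupDeployment keeps zero old ReplicaSets around, so superseded
// ones are deleted as soon as a rollout finishes.
func createCleanupDeployment(clientset *kubernetes.Clientset, ns string) (*appsv1.Deployment, error) {
	labels := map[string]string{"name": "cleanup-pod"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             int32Ptr(1),
			RevisionHistoryLimit: int32Ptr(0), // prune all old ReplicaSets
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	return clientset.AppsV1().Deployments(ns).Create(d) // v1.15-era signature
}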
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 12:57:36.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Mar 18 12:57:40.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-6c5d145b-27cb-4a89-875b-88a6977f07d9 -c busybox-main-container --namespace=emptydir-1467 -- cat /usr/share/volumeshare/shareddata.txt'
Mar 18 12:57:43.264: INFO: stderr: "I0318 12:57:43.165004 57 log.go:172] (0xc000b5e420) (0xc000472b40) Create stream\nI0318 12:57:43.165061 57 log.go:172] (0xc000b5e420) (0xc000472b40) Stream added, broadcasting: 1\nI0318 12:57:43.168124 57 log.go:172] (0xc000b5e420) Reply frame received for 1\nI0318 12:57:43.168168 57 log.go:172] (0xc000b5e420) (0xc000768000) Create stream\nI0318 12:57:43.168190 57 log.go:172] (0xc000b5e420) (0xc000768000) Stream added, broadcasting: 3\nI0318 12:57:43.169358 57 log.go:172] (0xc000b5e420) Reply frame received for 3\nI0318 12:57:43.169415 57 log.go:172] (0xc000b5e420) (0xc000840000) Create stream\nI0318 12:57:43.169434 57 log.go:172] (0xc000b5e420) (0xc000840000) Stream added, broadcasting: 5\nI0318 12:57:43.170316 57 log.go:172] (0xc000b5e420) Reply frame received for 5\nI0318 12:57:43.257849 57 log.go:172] (0xc000b5e420) Data frame received for 5\nI0318 12:57:43.257888 57 log.go:172] (0xc000840000) (5) Data frame handling\nI0318 12:57:43.257915 57 log.go:172] (0xc000b5e420) Data frame received for 3\nI0318 12:57:43.257955 57 log.go:172] (0xc000768000) (3) Data frame handling\nI0318 12:57:43.257974 57 log.go:172] (0xc000768000) (3) Data frame sent\nI0318 12:57:43.257985 57 log.go:172] (0xc000b5e420) Data frame received for 3\nI0318 12:57:43.257994 57 log.go:172] (0xc000768000) (3) Data frame handling\nI0318 12:57:43.259360 57 log.go:172] (0xc000b5e420) Data frame received for 1\nI0318 12:57:43.259381 57 log.go:172] (0xc000472b40) (1) Data frame handling\nI0318 12:57:43.259393 57 log.go:172] (0xc000472b40) (1) Data frame sent\nI0318 12:57:43.259416 57 log.go:172] (0xc000b5e420) (0xc000472b40) Stream removed, broadcasting: 1\nI0318 12:57:43.259445 57 log.go:172] (0xc000b5e420) Go away received\nI0318 12:57:43.259799 57 log.go:172] (0xc000b5e420) (0xc000472b40) Stream removed, broadcasting: 1\nI0318 12:57:43.259821 57 log.go:172] (0xc000b5e420) (0xc000768000) Stream removed, broadcasting: 3\nI0318 12:57:43.259833 57 log.go:172] (0xc000b5e420) (0xc000840000) Stream removed, broadcasting: 5\n"
Mar 18 12:57:43.264: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 12:57:43.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1467" for this suite.
Mar 18 12:57:49.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 12:57:49.357: INFO: namespace emptydir-1467 deletion completed in 6.089894439s
• [SLOW TEST:12.702 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
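For reference: the shared-volume spec works because both containers mount the same emptyDir; one writes /usr/share/volumeshare/shareddata.txt and the test reads it back through the other. A minimal two-container sketch under the same client-go assumptions; the busybox images and exact commands are illustrative.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSharedVolumePod mounts one emptyDir into two containers: a writer
// that drops a file and a long-running main container to exec into.
func createSharedVolumePod(clientset *kubernetes.Clientset, ns string) (*corev1.Pod, error) {
	mount := corev1.VolumeMount{Name: "volumeshare", MountPath: "/usr/share/volumeshare"}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-sharedvolume-"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "volumeshare",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{
				{
					Name:         "busybox-main-container",
					Image:        "busybox",
					Command:      []string{"sleep", "3600"}, // stays up so it can be exec'd into
					VolumeMounts: []corev1.VolumeMount{mount},
				},
				{
					Name:         "busybox-sub-container",
					Image:        "busybox",
					Command:      []string{"sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{mount},
				},
			},
		},
	}
	return clientset.CoreV1().Pods(ns).Create(pod)
}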
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 12:57:49.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 18 12:57:49.396: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bde036ae-52b8-4373-a4b7-d3ef522b1df8" in namespace "projected-7295" to be "success or failure"
Mar 18 12:57:49.415: INFO: Pod "downwardapi-volume-bde036ae-52b8-4373-a4b7-d3ef522b1df8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.683058ms
Mar 18 12:57:51.418: INFO: Pod "downwardapi-volume-bde036ae-52b8-4373-a4b7-d3ef522b1df8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022276623s
Mar 18 12:57:53.423: INFO: Pod "downwardapi-volume-bde036ae-52b8-4373-a4b7-d3ef522b1df8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026842261s
STEP: Saw pod success
Mar 18 12:57:53.423: INFO: Pod "downwardapi-volume-bde036ae-52b8-4373-a4b7-d3ef522b1df8" satisfied condition "success or failure"
Mar 18 12:57:53.426: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-bde036ae-52b8-4373-a4b7-d3ef522b1df8 container client-container:
STEP: delete the pod
Mar 18 12:57:53.458: INFO: Waiting for pod downwardapi-volume-bde036ae-52b8-4373-a4b7-d3ef522b1df8 to disappear
Mar 18 12:57:53.474: INFO: Pod downwardapi-volume-bde036ae-52b8-4373-a4b7-d3ef522b1df8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 12:57:53.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7295" for this suite.
Mar 18 12:57:59.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 12:57:59.577: INFO: namespace projected-7295 deletion completed in 6.100507928s
• [SLOW TEST:10.219 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 12:57:59.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Mar 18 12:57:59.649: INFO: Waiting up to 5m0s for pod "pod-e354af41-da8d-44be-89cb-0962b02f6b3b" in namespace "emptydir-5542" to be "success or failure"
Mar 18 12:57:59.653: INFO: Pod "pod-e354af41-da8d-44be-89cb-0962b02f6b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.112969ms
Mar 18 12:58:01.657: INFO: Pod "pod-e354af41-da8d-44be-89cb-0962b02f6b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007583736s
Mar 18 12:58:03.661: INFO: Pod "pod-e354af41-da8d-44be-89cb-0962b02f6b3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011938722s
STEP: Saw pod success
Mar 18 12:58:03.661: INFO: Pod "pod-e354af41-da8d-44be-89cb-0962b02f6b3b" satisfied condition "success or failure"
Mar 18 12:58:03.664: INFO: Trying to get logs from node iruya-worker2 pod pod-e354af41-da8d-44be-89cb-0962b02f6b3b container test-container:
STEP: delete the pod
Mar 18 12:58:03.699: INFO: Waiting for pod pod-e354af41-da8d-44be-89cb-0962b02f6b3b to disappear
Mar 18 12:58:03.714: INFO: Pod pod-e354af41-da8d-44be-89cb-0962b02f6b3b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 12:58:03.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5542" for this suite.
Mar 18 12:58:09.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 12:58:09.808: INFO: namespace emptydir-5542 deletion completed in 6.091166702s
• [SLOW TEST:10.231 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 12:58:09.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8154
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 18 12:58:09.840: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 18 12:58:35.959: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.242:8080/dial?request=hostName&protocol=http&host=10.244.2.241&port=8080&tries=1'] Namespace:pod-network-test-8154 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 18 12:58:35.959: INFO: >>> kubeConfig: /root/.kube/config
I0318 12:58:35.988608 6 log.go:172] (0xc002b56840) (0xc002cdc780) Create stream
I0318 12:58:35.988640 6 log.go:172] (0xc002b56840) (0xc002cdc780) Stream added, broadcasting: 1
I0318 12:58:35.990981 6 log.go:172] (0xc002b56840) Reply frame received for 1
I0318 12:58:35.991022 6 log.go:172] (0xc002b56840) (0xc0005d8140) Create stream
I0318 12:58:35.991036 6 log.go:172] (0xc002b56840) (0xc0005d8140) Stream added, broadcasting: 3
I0318 12:58:35.992091 6 log.go:172] (0xc002b56840) Reply frame received for 3
I0318 12:58:35.992144 6 log.go:172] (0xc002b56840) (0xc002871e00) Create stream
I0318 12:58:35.992160 6 log.go:172] (0xc002b56840) (0xc002871e00) Stream added, broadcasting: 5
I0318 12:58:35.993074 6 log.go:172] (0xc002b56840) Reply frame received for 5
I0318 12:58:36.083068 6 log.go:172] (0xc002b56840) Data frame received for 3
I0318 12:58:36.083096 6 log.go:172] (0xc0005d8140) (3) Data frame handling
I0318 12:58:36.083112 6 log.go:172] (0xc0005d8140) (3) Data frame sent
I0318 12:58:36.083583 6 log.go:172] (0xc002b56840) Data frame received for 5
I0318 12:58:36.083618 6 log.go:172] (0xc002871e00) (5) Data frame handling
I0318 12:58:36.083696 6 log.go:172] (0xc002b56840) Data frame received for 3
I0318 12:58:36.083725 6 log.go:172] (0xc0005d8140) (3) Data frame handling
I0318 12:58:36.085484 6 log.go:172] (0xc002b56840) Data frame received for 1
I0318 12:58:36.085502 6 log.go:172] (0xc002cdc780) (1) Data frame handling
I0318 12:58:36.085512 6 log.go:172] (0xc002cdc780) (1) Data frame sent
I0318 12:58:36.085624 6 log.go:172] (0xc002b56840) (0xc002cdc780) Stream removed, broadcasting: 1
I0318 12:58:36.085647 6 log.go:172] (0xc002b56840) Go away received
I0318 12:58:36.085945 6 log.go:172] (0xc002b56840) (0xc002cdc780) Stream removed, broadcasting: 1
I0318 12:58:36.085960 6 log.go:172] (0xc002b56840) (0xc0005d8140) Stream removed, broadcasting: 3
I0318 12:58:36.085966 6 log.go:172] (0xc002b56840) (0xc002871e00) Stream removed, broadcasting: 5
Mar 18 12:58:36.086: INFO: Waiting for endpoints: map[]
Mar 18 12:58:36.089: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.242:8080/dial?request=hostName&protocol=http&host=10.244.1.234&port=8080&tries=1'] Namespace:pod-network-test-8154 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 18 12:58:36.089: INFO: >>> kubeConfig: /root/.kube/config
I0318 12:58:36.117036 6 log.go:172] (0xc002de3290) (0xc0005d8500) Create stream
I0318 12:58:36.117056 6 log.go:172] (0xc002de3290) (0xc0005d8500) Stream added, broadcasting: 1
I0318 12:58:36.119613 6 log.go:172] (0xc002de3290) Reply frame received for 1
I0318 12:58:36.119660 6 log.go:172] (0xc002de3290) (0xc001b92d20) Create stream
I0318 12:58:36.119672 6 log.go:172] (0xc002de3290) (0xc001b92d20) Stream added, broadcasting: 3
I0318 12:58:36.120650 6 log.go:172] (0xc002de3290) Reply frame received for 3
I0318 12:58:36.120712 6 log.go:172] (0xc002de3290) (0xc0005d8aa0) Create stream
I0318 12:58:36.120735 6 log.go:172] (0xc002de3290) (0xc0005d8aa0) Stream added, broadcasting: 5
I0318 12:58:36.121832 6 log.go:172] (0xc002de3290) Reply frame received for 5
I0318 12:58:36.188081 6 log.go:172] (0xc002de3290) Data frame received for 3
I0318 12:58:36.188114 6 log.go:172] (0xc001b92d20) (3) Data frame handling
I0318 12:58:36.188147 6 log.go:172] (0xc001b92d20) (3) Data frame sent
I0318 12:58:36.188515 6 log.go:172] (0xc002de3290) Data frame received for 3
I0318 12:58:36.188545 6 log.go:172] (0xc001b92d20) (3) Data frame handling
I0318 12:58:36.188567 6 log.go:172] (0xc002de3290) Data frame received for 5
I0318 12:58:36.188579 6 log.go:172] (0xc0005d8aa0) (5) Data frame handling
I0318 12:58:36.190349 6 log.go:172] (0xc002de3290) Data frame received for 1
I0318 12:58:36.190377 6 log.go:172] (0xc0005d8500) (1) Data frame handling
I0318 12:58:36.190388 6 log.go:172] (0xc0005d8500) (1) Data frame sent
I0318 12:58:36.190400 6 log.go:172] (0xc002de3290) (0xc0005d8500) Stream removed, broadcasting: 1
I0318 12:58:36.190449 6 log.go:172] (0xc002de3290) Go away received
I0318 12:58:36.190480 6 log.go:172] (0xc002de3290) (0xc0005d8500) Stream removed, broadcasting: 1
I0318 12:58:36.190490 6 log.go:172] (0xc002de3290) (0xc001b92d20) Stream removed, broadcasting: 3
I0318 12:58:36.190502 6 log.go:172] (0xc002de3290) (0xc0005d8aa0) Stream removed, broadcasting: 5
Mar 18 12:58:36.190: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 12:58:36.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8154" for this suite.
Mar 18 12:58:58.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 12:58:58.292: INFO: namespace pod-network-test-8154 deletion completed in 22.097285599s
• [SLOW TEST:48.483 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
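For reference: the connectivity check above asks the host-test pod's test image to dial a target pod over HTTP via its /dial endpoint; the URL shape is visible in the ExecWithOptions lines. A sketch of issuing the same probe directly from Go; the JSON reply shape ({"responses":[...]}) is an assumption about the e2e test image's convention, and the pod IPs are the ones from this particular run.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// dialReply mirrors the response the e2e test image is assumed to return
// from its /dial endpoint: one entry per successful try.
type dialReply struct {
	Responses []string `json:"responses"`
}

func main() {
	// Same probe the log shows: ask 10.244.2.242 to fetch hostName from
	// 10.244.2.241 over HTTP with a single try.
	url := "http://10.244.2.242:8080/dial?request=hostName&protocol=http&host=10.244.2.241&port=8080&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var reply dialReply
	if err := json.NewDecoder(resp.Body).Decode(&reply); err != nil {
		panic(err)
	}
	fmt.Println("hostnames that answered:", reply.Responses)
}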
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 12:58:58.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-1abad23c-1769-4c28-a18a-6fdbaf26c42d
STEP: Creating a pod to test consume secrets
Mar 18 12:58:58.361: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a7e4d398-847a-4386-b423-588516c8b4b2" in namespace "projected-5324" to be "success or failure"
Mar 18 12:58:58.367: INFO: Pod "pod-projected-secrets-a7e4d398-847a-4386-b423-588516c8b4b2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.733946ms
Mar 18 12:59:00.371: INFO: Pod "pod-projected-secrets-a7e4d398-847a-4386-b423-588516c8b4b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009415492s
Mar 18 12:59:02.375: INFO: Pod "pod-projected-secrets-a7e4d398-847a-4386-b423-588516c8b4b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013963524s
STEP: Saw pod success
Mar 18 12:59:02.375: INFO: Pod "pod-projected-secrets-a7e4d398-847a-4386-b423-588516c8b4b2" satisfied condition "success or failure"
Mar 18 12:59:02.378: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-a7e4d398-847a-4386-b423-588516c8b4b2 container projected-secret-volume-test:
STEP: delete the pod
Mar 18 12:59:02.410: INFO: Waiting for pod pod-projected-secrets-a7e4d398-847a-4386-b423-588516c8b4b2 to disappear
Mar 18 12:59:02.420: INFO: Pod pod-projected-secrets-a7e4d398-847a-4386-b423-588516c8b4b2 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 12:59:02.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5324" for this suite.
Mar 18 12:59:08.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 12:59:08.513: INFO: namespace projected-5324 deletion completed in 6.089518706s
• [SLOW TEST:10.220 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
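For reference: "consumable with mappings" means the projected secret volume remaps a secret key to a new path (and can pin a per-file mode). A sketch of that projection, same client-go assumptions as before; the secret name, key, path, image, and 0400 mode are illustrative.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createMappedSecretPod projects key "data-1" of an existing secret to the
// file new-path-data-1 with mode 0400 inside the mounted volume.
func createMappedSecretPod(clientset *kubernetes.Clientset, ns, secretName string) (*corev1.Pod, error) {
	mode := int32(0400)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-projected-secrets-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
								// Remap the key to a different file name with an explicit mode.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox", // hypothetical image
				Command:      []string{"sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume"}},
			}},
		},
	}
	return clientset.CoreV1().Pods(ns).Create(pod)
}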
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 12:59:08.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-1238c214-fc12-4d1b-a200-59c6596353c7
STEP: Creating a pod to test consume secrets
Mar 18 12:59:08.614: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b12111c6-d4ec-4761-80ab-122fd8a89c75" in namespace "projected-5059" to be "success or failure"
Mar 18 12:59:08.624: INFO: Pod "pod-projected-secrets-b12111c6-d4ec-4761-80ab-122fd8a89c75": Phase="Pending", Reason="", readiness=false. Elapsed: 9.731198ms
Mar 18 12:59:10.628: INFO: Pod "pod-projected-secrets-b12111c6-d4ec-4761-80ab-122fd8a89c75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013496751s
Mar 18 12:59:12.632: INFO: Pod "pod-projected-secrets-b12111c6-d4ec-4761-80ab-122fd8a89c75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017929803s
STEP: Saw pod success
Mar 18 12:59:12.632: INFO: Pod "pod-projected-secrets-b12111c6-d4ec-4761-80ab-122fd8a89c75" satisfied condition "success or failure"
Mar 18 12:59:12.636: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-b12111c6-d4ec-4761-80ab-122fd8a89c75 container projected-secret-volume-test:
STEP: delete the pod
Mar 18 12:59:12.656: INFO: Waiting for pod pod-projected-secrets-b12111c6-d4ec-4761-80ab-122fd8a89c75 to disappear
Mar 18 12:59:12.660: INFO: Pod pod-projected-secrets-b12111c6-d4ec-4761-80ab-122fd8a89c75 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 12:59:12.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5059" for this suite.
Mar 18 12:59:18.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 12:59:18.753: INFO: namespace projected-5059 deletion completed in 6.090614714s
• [SLOW TEST:10.239 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
Elapsed: 4.026842611s STEP: Saw pod success Mar 18 12:59:22.846: INFO: Pod "downwardapi-volume-8607447b-2e1a-450e-a55b-8b33d1ef40d6" satisfied condition "success or failure" Mar 18 12:59:22.849: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8607447b-2e1a-450e-a55b-8b33d1ef40d6 container client-container: STEP: delete the pod Mar 18 12:59:22.880: INFO: Waiting for pod downwardapi-volume-8607447b-2e1a-450e-a55b-8b33d1ef40d6 to disappear Mar 18 12:59:22.889: INFO: Pod downwardapi-volume-8607447b-2e1a-450e-a55b-8b33d1ef40d6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 12:59:22.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1307" for this suite. Mar 18 12:59:28.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:59:29.008: INFO: namespace downward-api-1307 deletion completed in 6.116620316s • [SLOW TEST:10.255 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 12:59:29.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 18 12:59:29.088: INFO: Waiting up to 5m0s for pod "pod-37a72757-1697-426d-a7b3-20cfec94c029" in namespace "emptydir-4361" to be "success or failure" Mar 18 12:59:29.142: INFO: Pod "pod-37a72757-1697-426d-a7b3-20cfec94c029": Phase="Pending", Reason="", readiness=false. Elapsed: 53.622661ms Mar 18 12:59:31.146: INFO: Pod "pod-37a72757-1697-426d-a7b3-20cfec94c029": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057935488s Mar 18 12:59:33.150: INFO: Pod "pod-37a72757-1697-426d-a7b3-20cfec94c029": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.062301901s STEP: Saw pod success Mar 18 12:59:33.151: INFO: Pod "pod-37a72757-1697-426d-a7b3-20cfec94c029" satisfied condition "success or failure" Mar 18 12:59:33.154: INFO: Trying to get logs from node iruya-worker pod pod-37a72757-1697-426d-a7b3-20cfec94c029 container test-container: STEP: delete the pod Mar 18 12:59:33.181: INFO: Waiting for pod pod-37a72757-1697-426d-a7b3-20cfec94c029 to disappear Mar 18 12:59:33.195: INFO: Pod pod-37a72757-1697-426d-a7b3-20cfec94c029 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 12:59:33.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4361" for this suite. Mar 18 12:59:39.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:59:39.328: INFO: namespace emptydir-4361 deletion completed in 6.130736371s • [SLOW TEST:10.318 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 12:59:39.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Mar 18 12:59:39.378: INFO: Waiting up to 5m0s for pod "client-containers-9b3d9f9b-9e54-4a8d-b0de-85d512fe3fb0" in namespace "containers-2924" to be "success or failure" Mar 18 12:59:39.403: INFO: Pod "client-containers-9b3d9f9b-9e54-4a8d-b0de-85d512fe3fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 25.028006ms Mar 18 12:59:41.407: INFO: Pod "client-containers-9b3d9f9b-9e54-4a8d-b0de-85d512fe3fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02971364s Mar 18 12:59:43.411: INFO: Pod "client-containers-9b3d9f9b-9e54-4a8d-b0de-85d512fe3fb0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033748313s STEP: Saw pod success Mar 18 12:59:43.412: INFO: Pod "client-containers-9b3d9f9b-9e54-4a8d-b0de-85d512fe3fb0" satisfied condition "success or failure" Mar 18 12:59:43.415: INFO: Trying to get logs from node iruya-worker2 pod client-containers-9b3d9f9b-9e54-4a8d-b0de-85d512fe3fb0 container test-container: STEP: delete the pod Mar 18 12:59:43.446: INFO: Waiting for pod client-containers-9b3d9f9b-9e54-4a8d-b0de-85d512fe3fb0 to disappear Mar 18 12:59:43.469: INFO: Pod client-containers-9b3d9f9b-9e54-4a8d-b0de-85d512fe3fb0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 12:59:43.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2924" for this suite. Mar 18 12:59:49.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 12:59:49.560: INFO: namespace containers-2924 deletion completed in 6.088652232s • [SLOW TEST:10.232 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 12:59:49.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 12:59:49.633: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 18 12:59:49.695: INFO: Number of nodes with available pods: 0 Mar 18 12:59:49.695: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
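The relabeling step just announced drives the rest of this spec. Outside the suite the same effect can be reproduced by hand; a minimal kubectl sketch, assuming a placeholder label key "color" since the real key is generated by the test and never printed in this log:

    kubectl label node iruya-worker color=blue --overwrite    # nodeSelector matches: daemon pod is launched
    kubectl label node iruya-worker color=green --overwrite   # selector no longer matches: pod is unscheduled

The polling lines that follow are the framework waiting until the count of nodes running a ready daemon pod matches the count of labeled nodes.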
Mar 18 12:59:49.758: INFO: Number of nodes with available pods: 0 Mar 18 12:59:49.758: INFO: Node iruya-worker is running more than one daemon pod Mar 18 12:59:50.763: INFO: Number of nodes with available pods: 0 Mar 18 12:59:50.763: INFO: Node iruya-worker is running more than one daemon pod Mar 18 12:59:51.762: INFO: Number of nodes with available pods: 0 Mar 18 12:59:51.762: INFO: Node iruya-worker is running more than one daemon pod Mar 18 12:59:52.763: INFO: Number of nodes with available pods: 0 Mar 18 12:59:52.763: INFO: Node iruya-worker is running more than one daemon pod Mar 18 12:59:53.762: INFO: Number of nodes with available pods: 1 Mar 18 12:59:53.762: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 18 12:59:53.811: INFO: Number of nodes with available pods: 1 Mar 18 12:59:53.811: INFO: Number of running nodes: 0, number of available pods: 1 Mar 18 12:59:54.815: INFO: Number of nodes with available pods: 0 Mar 18 12:59:54.815: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 18 12:59:54.830: INFO: Number of nodes with available pods: 0 Mar 18 12:59:54.830: INFO: Node iruya-worker is running more than one daemon pod Mar 18 12:59:55.834: INFO: Number of nodes with available pods: 0 Mar 18 12:59:55.834: INFO: Node iruya-worker is running more than one daemon pod Mar 18 12:59:56.834: INFO: Number of nodes with available pods: 0 Mar 18 12:59:56.835: INFO: Node iruya-worker is running more than one daemon pod Mar 18 12:59:57.834: INFO: Number of nodes with available pods: 0 Mar 18 12:59:57.834: INFO: Node iruya-worker is running more than one daemon pod Mar 18 12:59:58.835: INFO: Number of nodes with available pods: 0 Mar 18 12:59:58.835: INFO: Node iruya-worker is running more than one daemon pod Mar 18 12:59:59.834: INFO: Number of nodes with available pods: 0 Mar 18 12:59:59.834: INFO: Node iruya-worker is running more than one daemon pod Mar 18 13:00:00.835: INFO: Number of nodes with available pods: 0 Mar 18 13:00:00.835: INFO: Node iruya-worker is running more than one daemon pod Mar 18 13:00:01.834: INFO: Number of nodes with available pods: 0 Mar 18 13:00:01.835: INFO: Node iruya-worker is running more than one daemon pod Mar 18 13:00:02.834: INFO: Number of nodes with available pods: 0 Mar 18 13:00:02.834: INFO: Node iruya-worker is running more than one daemon pod Mar 18 13:00:03.834: INFO: Number of nodes with available pods: 0 Mar 18 13:00:03.834: INFO: Node iruya-worker is running more than one daemon pod Mar 18 13:00:04.834: INFO: Number of nodes with available pods: 0 Mar 18 13:00:04.834: INFO: Node iruya-worker is running more than one daemon pod Mar 18 13:00:05.835: INFO: Number of nodes with available pods: 1 Mar 18 13:00:05.835: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2781, will wait for the garbage collector to delete the pods Mar 18 13:00:05.902: INFO: Deleting DaemonSet.extensions daemon-set took: 7.051379ms Mar 18 13:00:06.202: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.282006ms Mar 18 13:00:12.315: INFO: Number of nodes with available pods: 0 Mar 18 13:00:12.315: INFO: 
Number of running nodes: 0, number of available pods: 0 Mar 18 13:00:12.322: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2781/daemonsets","resourceVersion":"513696"},"items":null} Mar 18 13:00:12.325: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2781/pods","resourceVersion":"513696"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:00:12.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2781" for this suite. Mar 18 13:00:18.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:00:18.463: INFO: namespace daemonsets-2781 deletion completed in 6.10276138s • [SLOW TEST:28.902 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:00:18.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-89b307bd-d1df-4eaa-bb1b-9470c3fdef66 STEP: Creating configMap with name cm-test-opt-upd-bdfdc0c4-8238-4c13-b7f2-3334e649449d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-89b307bd-d1df-4eaa-bb1b-9470c3fdef66 STEP: Updating configmap cm-test-opt-upd-bdfdc0c4-8238-4c13-b7f2-3334e649449d STEP: Creating configMap with name cm-test-opt-create-9d4ce2bd-fb81-466e-af61-867eda8879bc STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:01:34.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8396" for this suite. 
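The create/delete/update sequence above leans on the configMap volume source's "optional" flag, which lets the pod keep running while a referenced ConfigMap is absent. A minimal sketch of the relevant volume stanza, with placeholder names standing in for the generated cm-test-opt-* ones:

    volumes:
    - name: cm-del
      configMap:
        name: cm-test-opt-del       # deleted mid-test; the pod keeps running
        optional: true
    - name: cm-create
      configMap:
        name: cm-test-opt-create    # created mid-test; projected once it exists
        optional: true

The long "waiting to observe update in volume" phase is the kubelet's periodic sync propagating those changes into the mounted files.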
Mar 18 13:01:56.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:01:57.063: INFO: namespace configmap-8396 deletion completed in 22.089247824s • [SLOW TEST:98.600 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:01:57.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 18 13:01:57.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-3566' Mar 18 13:01:57.209: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 13:01:57.209: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Mar 18 13:01:59.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3566' Mar 18 13:01:59.426: INFO: stderr: "" Mar 18 13:01:59.426: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:01:59.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3566" for this suite. 
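The stderr line captured above names the migration path: the deployment/apps.v1 generator was already deprecated on this kubectl. A non-deprecated equivalent of the command the test ran would be roughly:

    kubectl create deployment e2e-test-nginx-deployment \
        --image=docker.io/library/nginx:1.14-alpine \
        --namespace=kubectl-3566

kubectl create deployment emits an apps/v1 Deployment directly, which is why the warning points at it.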
Mar 18 13:03:21.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:03:21.525: INFO: namespace kubectl-3566 deletion completed in 1m22.09563596s • [SLOW TEST:84.461 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:03:21.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-2588d960-6638-4354-bf45-d8b5233b602f in namespace container-probe-3547 Mar 18 13:03:25.622: INFO: Started pod busybox-2588d960-6638-4354-bf45-d8b5233b602f in namespace container-probe-3547 STEP: checking the pod's current state and verifying that restartCount is present Mar 18 13:03:25.625: INFO: Initial restart count of pod busybox-2588d960-6638-4354-bf45-d8b5233b602f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:07:26.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3547" for this suite. 
Mar 18 13:07:32.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:07:32.312: INFO: namespace container-probe-3547 deletion completed in 6.107167701s • [SLOW TEST:250.787 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:07:32.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 18 13:07:32.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9607' Mar 18 13:07:32.685: INFO: stderr: "" Mar 18 13:07:32.685: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 18 13:07:32.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9607' Mar 18 13:07:32.817: INFO: stderr: "" Mar 18 13:07:32.817: INFO: stdout: "update-demo-nautilus-qqt52 update-demo-nautilus-xdn7d " Mar 18 13:07:32.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qqt52 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9607' Mar 18 13:07:32.909: INFO: stderr: "" Mar 18 13:07:32.909: INFO: stdout: "" Mar 18 13:07:32.910: INFO: update-demo-nautilus-qqt52 is created but not running Mar 18 13:07:37.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9607' Mar 18 13:07:38.015: INFO: stderr: "" Mar 18 13:07:38.015: INFO: stdout: "update-demo-nautilus-qqt52 update-demo-nautilus-xdn7d " Mar 18 13:07:38.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qqt52 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9607' Mar 18 13:07:38.107: INFO: stderr: "" Mar 18 13:07:38.107: INFO: stdout: "true" Mar 18 13:07:38.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qqt52 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9607' Mar 18 13:07:38.194: INFO: stderr: "" Mar 18 13:07:38.194: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 13:07:38.194: INFO: validating pod update-demo-nautilus-qqt52 Mar 18 13:07:38.198: INFO: got data: { "image": "nautilus.jpg" } Mar 18 13:07:38.198: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 13:07:38.198: INFO: update-demo-nautilus-qqt52 is verified up and running Mar 18 13:07:38.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xdn7d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9607' Mar 18 13:07:38.291: INFO: stderr: "" Mar 18 13:07:38.291: INFO: stdout: "true" Mar 18 13:07:38.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xdn7d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9607' Mar 18 13:07:38.388: INFO: stderr: "" Mar 18 13:07:38.388: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 13:07:38.388: INFO: validating pod update-demo-nautilus-xdn7d Mar 18 13:07:38.392: INFO: got data: { "image": "nautilus.jpg" } Mar 18 13:07:38.392: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 13:07:38.392: INFO: update-demo-nautilus-xdn7d is verified up and running STEP: scaling down the replication controller Mar 18 13:07:38.394: INFO: scanned /root for discovery docs: Mar 18 13:07:38.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9607' Mar 18 13:07:39.530: INFO: stderr: "" Mar 18 13:07:39.530: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 18 13:07:39.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9607' Mar 18 13:07:39.623: INFO: stderr: "" Mar 18 13:07:39.623: INFO: stdout: "update-demo-nautilus-qqt52 update-demo-nautilus-xdn7d " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 18 13:07:44.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9607' Mar 18 13:07:46.944: INFO: stderr: "" Mar 18 13:07:46.944: INFO: stdout: "update-demo-nautilus-xdn7d " Mar 18 13:07:46.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xdn7d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9607' Mar 18 13:07:47.032: INFO: stderr: "" Mar 18 13:07:47.032: INFO: stdout: "true" Mar 18 13:07:47.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xdn7d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9607' Mar 18 13:07:47.119: INFO: stderr: "" Mar 18 13:07:47.120: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 13:07:47.120: INFO: validating pod update-demo-nautilus-xdn7d Mar 18 13:07:47.123: INFO: got data: { "image": "nautilus.jpg" } Mar 18 13:07:47.123: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 13:07:47.123: INFO: update-demo-nautilus-xdn7d is verified up and running STEP: scaling up the replication controller Mar 18 13:07:47.125: INFO: scanned /root for discovery docs: Mar 18 13:07:47.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9607' Mar 18 13:07:48.261: INFO: stderr: "" Mar 18 13:07:48.261: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 18 13:07:48.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9607' Mar 18 13:07:48.355: INFO: stderr: "" Mar 18 13:07:48.355: INFO: stdout: "update-demo-nautilus-bqwtw update-demo-nautilus-xdn7d " Mar 18 13:07:48.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bqwtw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9607' Mar 18 13:07:48.450: INFO: stderr: "" Mar 18 13:07:48.450: INFO: stdout: "" Mar 18 13:07:48.450: INFO: update-demo-nautilus-bqwtw is created but not running Mar 18 13:07:53.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9607' Mar 18 13:07:53.561: INFO: stderr: "" Mar 18 13:07:53.561: INFO: stdout: "update-demo-nautilus-bqwtw update-demo-nautilus-xdn7d " Mar 18 13:07:53.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bqwtw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9607' Mar 18 13:07:53.652: INFO: stderr: "" Mar 18 13:07:53.652: INFO: stdout: "true" Mar 18 13:07:53.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bqwtw -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9607' Mar 18 13:07:53.738: INFO: stderr: "" Mar 18 13:07:53.738: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 13:07:53.738: INFO: validating pod update-demo-nautilus-bqwtw Mar 18 13:07:53.742: INFO: got data: { "image": "nautilus.jpg" } Mar 18 13:07:53.742: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 13:07:53.742: INFO: update-demo-nautilus-bqwtw is verified up and running Mar 18 13:07:53.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xdn7d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9607' Mar 18 13:07:53.829: INFO: stderr: "" Mar 18 13:07:53.829: INFO: stdout: "true" Mar 18 13:07:53.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xdn7d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9607' Mar 18 13:07:53.917: INFO: stderr: "" Mar 18 13:07:53.917: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 13:07:53.917: INFO: validating pod update-demo-nautilus-xdn7d Mar 18 13:07:53.921: INFO: got data: { "image": "nautilus.jpg" } Mar 18 13:07:53.921: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 13:07:53.921: INFO: update-demo-nautilus-xdn7d is verified up and running STEP: using delete to clean up resources Mar 18 13:07:53.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9607' Mar 18 13:07:54.044: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 18 13:07:54.044: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 18 13:07:54.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9607' Mar 18 13:07:54.134: INFO: stderr: "No resources found.\n" Mar 18 13:07:54.134: INFO: stdout: "" Mar 18 13:07:54.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9607 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 13:07:54.247: INFO: stderr: "" Mar 18 13:07:54.247: INFO: stdout: "update-demo-nautilus-bqwtw\nupdate-demo-nautilus-xdn7d\n" Mar 18 13:07:54.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9607' Mar 18 13:07:54.920: INFO: stderr: "No resources found.\n" Mar 18 13:07:54.920: INFO: stdout: "" Mar 18 13:07:54.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9607 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 13:07:55.036: INFO: stderr: "" Mar 18 13:07:55.036: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:07:55.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9607" for this suite. Mar 18 13:08:01.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:08:01.149: INFO: namespace kubectl-9607 deletion completed in 6.109128113s • [SLOW TEST:28.836 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:08:01.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 18 13:08:01.227: INFO: Waiting up to 5m0s for pod "downward-api-bcf998fa-d0a4-450d-9114-97000c9ee278" in namespace "downward-api-6291" to be "success or failure" Mar 18 13:08:01.231: INFO: Pod "downward-api-bcf998fa-d0a4-450d-9114-97000c9ee278": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.341279ms Mar 18 13:08:03.235: INFO: Pod "downward-api-bcf998fa-d0a4-450d-9114-97000c9ee278": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007429949s Mar 18 13:08:05.239: INFO: Pod "downward-api-bcf998fa-d0a4-450d-9114-97000c9ee278": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011667716s STEP: Saw pod success Mar 18 13:08:05.239: INFO: Pod "downward-api-bcf998fa-d0a4-450d-9114-97000c9ee278" satisfied condition "success or failure" Mar 18 13:08:05.242: INFO: Trying to get logs from node iruya-worker pod downward-api-bcf998fa-d0a4-450d-9114-97000c9ee278 container dapi-container: STEP: delete the pod Mar 18 13:08:05.292: INFO: Waiting for pod downward-api-bcf998fa-d0a4-450d-9114-97000c9ee278 to disappear Mar 18 13:08:05.302: INFO: Pod downward-api-bcf998fa-d0a4-450d-9114-97000c9ee278 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:08:05.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6291" for this suite. Mar 18 13:08:11.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:08:11.440: INFO: namespace downward-api-6291 deletion completed in 6.133594593s • [SLOW TEST:10.291 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:08:11.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-a61428a5-a966-4a93-bfab-9c8d9066c53f in namespace container-probe-7880 Mar 18 13:08:15.559: INFO: Started pod liveness-a61428a5-a966-4a93-bfab-9c8d9066c53f in namespace container-probe-7880 STEP: checking the pod's current state and verifying that restartCount is present Mar 18 13:08:15.562: INFO: Initial restart count of pod liveness-a61428a5-a966-4a93-bfab-9c8d9066c53f is 0 Mar 18 13:08:35.644: INFO: Restart count of pod container-probe-7880/liveness-a61428a5-a966-4a93-bfab-9c8d9066c53f is now 1 (20.082407979s elapsed) Mar 18 13:08:55.687: INFO: Restart count of pod container-probe-7880/liveness-a61428a5-a966-4a93-bfab-9c8d9066c53f is now 2 (40.124667376s elapsed) Mar 18 13:09:15.738: INFO: Restart count of pod container-probe-7880/liveness-a61428a5-a966-4a93-bfab-9c8d9066c53f is now 3 (1m0.176036824s elapsed) Mar 18 13:09:35.795: INFO: Restart count of 
pod container-probe-7880/liveness-a61428a5-a966-4a93-bfab-9c8d9066c53f is now 4 (1m20.232692998s elapsed) Mar 18 13:10:48.099: INFO: Restart count of pod container-probe-7880/liveness-a61428a5-a966-4a93-bfab-9c8d9066c53f is now 5 (2m32.536893614s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:10:48.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7880" for this suite. Mar 18 13:10:54.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:10:54.255: INFO: namespace container-probe-7880 deletion completed in 6.117519375s • [SLOW TEST:162.814 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:10:54.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 13:10:54.364: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:10:58.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-177" for this suite. 
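Under the hood this is the same exec subresource kubectl exec uses, negotiated over WebSocket instead of SPDY. A hand-rolled client would dial something like the following (URL shape per the core v1 API conventions; the pod name is generated and not printed in this log):

    wss://<apiserver>/api/v1/namespaces/pods-177/pods/<pod-name>/exec?command=echo&command=hello&stdout=true&stderr=true

with one of the channel.k8s.io subprotocols in the Sec-WebSocket-Protocol header, so stdout and stderr come back multiplexed on numbered channels.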
Mar 18 13:11:44.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:11:44.618: INFO: namespace pods-177 deletion completed in 46.09518237s • [SLOW TEST:50.363 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:11:44.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Mar 18 13:11:44.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3441' Mar 18 13:11:44.947: INFO: stderr: "" Mar 18 13:11:44.947: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 18 13:11:45.951: INFO: Selector matched 1 pods for map[app:redis] Mar 18 13:11:45.951: INFO: Found 0 / 1 Mar 18 13:11:46.951: INFO: Selector matched 1 pods for map[app:redis] Mar 18 13:11:46.951: INFO: Found 0 / 1 Mar 18 13:11:47.951: INFO: Selector matched 1 pods for map[app:redis] Mar 18 13:11:47.952: INFO: Found 1 / 1 Mar 18 13:11:47.952: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 18 13:11:47.956: INFO: Selector matched 1 pods for map[app:redis] Mar 18 13:11:47.956: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 18 13:11:47.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-687qr --namespace=kubectl-3441 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 18 13:11:48.075: INFO: stderr: "" Mar 18 13:11:48.075: INFO: stdout: "pod/redis-master-687qr patched\n" STEP: checking annotations Mar 18 13:11:48.079: INFO: Selector matched 1 pods for map[app:redis] Mar 18 13:11:48.079: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:11:48.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3441" for this suite. 
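The patch command buried in the log is reusable as-is once quoted for a shell (the pod name is the one generated in this run):

    kubectl patch pod redis-master-687qr --namespace=kubectl-3441 \
        -p '{"metadata":{"annotations":{"x":"y"}}}'

Because this is a strategic-merge patch, the x: y annotation is merged into metadata.annotations without disturbing existing keys; the "checking annotations" step then simply reads the pod back and asserts the key is present.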
Mar 18 13:12:10.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:12:10.183: INFO: namespace kubectl-3441 deletion completed in 22.10215671s • [SLOW TEST:25.565 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:12:10.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-ea1564ee-495d-4255-adad-7295a7eb8a42 STEP: Creating a pod to test consume configMaps Mar 18 13:12:10.242: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4b05766-3069-4c9c-b707-1d95045fb69f" in namespace "configmap-7125" to be "success or failure" Mar 18 13:12:10.283: INFO: Pod "pod-configmaps-f4b05766-3069-4c9c-b707-1d95045fb69f": Phase="Pending", Reason="", readiness=false. Elapsed: 41.658707ms Mar 18 13:12:12.290: INFO: Pod "pod-configmaps-f4b05766-3069-4c9c-b707-1d95045fb69f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048113375s Mar 18 13:12:14.294: INFO: Pod "pod-configmaps-f4b05766-3069-4c9c-b707-1d95045fb69f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052265776s STEP: Saw pod success Mar 18 13:12:14.294: INFO: Pod "pod-configmaps-f4b05766-3069-4c9c-b707-1d95045fb69f" satisfied condition "success or failure" Mar 18 13:12:14.297: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-f4b05766-3069-4c9c-b707-1d95045fb69f container configmap-volume-test: STEP: delete the pod Mar 18 13:12:14.324: INFO: Waiting for pod pod-configmaps-f4b05766-3069-4c9c-b707-1d95045fb69f to disappear Mar 18 13:12:14.335: INFO: Pod pod-configmaps-f4b05766-3069-4c9c-b707-1d95045fb69f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:12:14.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7125" for this suite. 
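What separates this spec from the plain consumable-ConfigMap case is only the pod-level security context; the volume plumbing is identical. A minimal sketch of the non-root portion, with an assumed UID and mount names, since the test's actual values live in configmap_volume.go rather than this log:

    spec:
      securityContext:
        runAsUser: 1000    # any non-zero UID exercises the non-root path
      containers:
      - name: configmap-volume-test
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume

The success criterion is unchanged: the container reads the projected file and exits 0, so the pod reports Succeeded.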
Mar 18 13:12:20.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:12:20.470: INFO: namespace configmap-7125 deletion completed in 6.130373727s • [SLOW TEST:10.286 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:12:20.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5854 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5854 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5854 Mar 18 13:12:20.593: INFO: Found 0 stateful pods, waiting for 1 Mar 18 13:12:30.598: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 18 13:12:30.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5854 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 13:12:30.869: INFO: stderr: "I0318 13:12:30.731710 722 log.go:172] (0xc000934420) (0xc000270780) Create stream\nI0318 13:12:30.731772 722 log.go:172] (0xc000934420) (0xc000270780) Stream added, broadcasting: 1\nI0318 13:12:30.734901 722 log.go:172] (0xc000934420) Reply frame received for 1\nI0318 13:12:30.734949 722 log.go:172] (0xc000934420) (0xc0007ce000) Create stream\nI0318 13:12:30.734961 722 log.go:172] (0xc000934420) (0xc0007ce000) Stream added, broadcasting: 3\nI0318 13:12:30.735691 722 log.go:172] (0xc000934420) Reply frame received for 3\nI0318 13:12:30.735728 722 log.go:172] (0xc000934420) (0xc000270820) Create stream\nI0318 13:12:30.735749 722 log.go:172] (0xc000934420) (0xc000270820) Stream added, broadcasting: 5\nI0318 13:12:30.736471 722 log.go:172] (0xc000934420) Reply frame received for 5\nI0318 13:12:30.814234 722 log.go:172] (0xc000934420) Data frame received for 5\nI0318 13:12:30.814263 722 log.go:172] (0xc000270820) (5) Data frame handling\nI0318 13:12:30.814281 722 log.go:172] (0xc000270820) (5) 
Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0318 13:12:30.863192 722 log.go:172] (0xc000934420) Data frame received for 3\nI0318 13:12:30.863218 722 log.go:172] (0xc0007ce000) (3) Data frame handling\nI0318 13:12:30.863234 722 log.go:172] (0xc0007ce000) (3) Data frame sent\nI0318 13:12:30.863424 722 log.go:172] (0xc000934420) Data frame received for 3\nI0318 13:12:30.863446 722 log.go:172] (0xc0007ce000) (3) Data frame handling\nI0318 13:12:30.863756 722 log.go:172] (0xc000934420) Data frame received for 5\nI0318 13:12:30.863798 722 log.go:172] (0xc000270820) (5) Data frame handling\nI0318 13:12:30.865718 722 log.go:172] (0xc000934420) Data frame received for 1\nI0318 13:12:30.865740 722 log.go:172] (0xc000270780) (1) Data frame handling\nI0318 13:12:30.865754 722 log.go:172] (0xc000270780) (1) Data frame sent\nI0318 13:12:30.865771 722 log.go:172] (0xc000934420) (0xc000270780) Stream removed, broadcasting: 1\nI0318 13:12:30.865792 722 log.go:172] (0xc000934420) Go away received\nI0318 13:12:30.866095 722 log.go:172] (0xc000934420) (0xc000270780) Stream removed, broadcasting: 1\nI0318 13:12:30.866112 722 log.go:172] (0xc000934420) (0xc0007ce000) Stream removed, broadcasting: 3\nI0318 13:12:30.866118 722 log.go:172] (0xc000934420) (0xc000270820) Stream removed, broadcasting: 5\n" Mar 18 13:12:30.870: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 13:12:30.870: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 13:12:30.873: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 18 13:12:40.878: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 18 13:12:40.878: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 13:12:40.891: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999961s Mar 18 13:12:41.899: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996216248s Mar 18 13:12:42.903: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.988898091s Mar 18 13:12:43.908: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.983958961s Mar 18 13:12:44.913: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.979109988s Mar 18 13:12:45.918: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.974297933s Mar 18 13:12:46.922: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.969450589s Mar 18 13:12:47.927: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.965307558s Mar 18 13:12:48.932: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.96041572s Mar 18 13:12:49.937: INFO: Verifying statefulset ss doesn't scale past 1 for another 955.953033ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5854 Mar 18 13:12:50.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5854 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 13:12:51.175: INFO: stderr: "I0318 13:12:51.080187 744 log.go:172] (0xc0001171e0) (0xc00059ea00) Create stream\nI0318 13:12:51.080254 744 log.go:172] (0xc0001171e0) (0xc00059ea00) Stream added, broadcasting: 1\nI0318 13:12:51.083986 744 log.go:172] (0xc0001171e0) Reply frame received for 1\nI0318 13:12:51.084044 744 log.go:172] (0xc0001171e0) (0xc00059e140) Create 
stream\nI0318 13:12:51.084061 744 log.go:172] (0xc0001171e0) (0xc00059e140) Stream added, broadcasting: 3\nI0318 13:12:51.085029 744 log.go:172] (0xc0001171e0) Reply frame received for 3\nI0318 13:12:51.085076 744 log.go:172] (0xc0001171e0) (0xc000024000) Create stream\nI0318 13:12:51.085091 744 log.go:172] (0xc0001171e0) (0xc000024000) Stream added, broadcasting: 5\nI0318 13:12:51.086230 744 log.go:172] (0xc0001171e0) Reply frame received for 5\nI0318 13:12:51.168540 744 log.go:172] (0xc0001171e0) Data frame received for 5\nI0318 13:12:51.168720 744 log.go:172] (0xc000024000) (5) Data frame handling\nI0318 13:12:51.168744 744 log.go:172] (0xc000024000) (5) Data frame sent\nI0318 13:12:51.168757 744 log.go:172] (0xc0001171e0) Data frame received for 5\nI0318 13:12:51.168768 744 log.go:172] (0xc000024000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0318 13:12:51.168806 744 log.go:172] (0xc0001171e0) Data frame received for 3\nI0318 13:12:51.168842 744 log.go:172] (0xc00059e140) (3) Data frame handling\nI0318 13:12:51.168874 744 log.go:172] (0xc00059e140) (3) Data frame sent\nI0318 13:12:51.168898 744 log.go:172] (0xc0001171e0) Data frame received for 3\nI0318 13:12:51.168912 744 log.go:172] (0xc00059e140) (3) Data frame handling\nI0318 13:12:51.170550 744 log.go:172] (0xc0001171e0) Data frame received for 1\nI0318 13:12:51.170587 744 log.go:172] (0xc00059ea00) (1) Data frame handling\nI0318 13:12:51.170614 744 log.go:172] (0xc00059ea00) (1) Data frame sent\nI0318 13:12:51.170631 744 log.go:172] (0xc0001171e0) (0xc00059ea00) Stream removed, broadcasting: 1\nI0318 13:12:51.170655 744 log.go:172] (0xc0001171e0) Go away received\nI0318 13:12:51.171106 744 log.go:172] (0xc0001171e0) (0xc00059ea00) Stream removed, broadcasting: 1\nI0318 13:12:51.171132 744 log.go:172] (0xc0001171e0) (0xc00059e140) Stream removed, broadcasting: 3\nI0318 13:12:51.171149 744 log.go:172] (0xc0001171e0) (0xc000024000) Stream removed, broadcasting: 5\n" Mar 18 13:12:51.175: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 13:12:51.175: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 13:12:51.179: INFO: Found 1 stateful pods, waiting for 3 Mar 18 13:13:01.187: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 13:13:01.187: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 13:13:01.187: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 18 13:13:01.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5854 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 13:13:01.437: INFO: stderr: "I0318 13:13:01.351305 764 log.go:172] (0xc00091a2c0) (0xc000a02640) Create stream\nI0318 13:13:01.351378 764 log.go:172] (0xc00091a2c0) (0xc000a02640) Stream added, broadcasting: 1\nI0318 13:13:01.353853 764 log.go:172] (0xc00091a2c0) Reply frame received for 1\nI0318 13:13:01.353885 764 log.go:172] (0xc00091a2c0) (0xc000a026e0) Create stream\nI0318 13:13:01.353894 764 log.go:172] (0xc00091a2c0) (0xc000a026e0) Stream added, broadcasting: 3\nI0318 13:13:01.355159 764 log.go:172] (0xc00091a2c0) Reply frame received for 3\nI0318 13:13:01.355204 764 log.go:172] 
(0xc00091a2c0) (0xc0008e2000) Create stream\nI0318 13:13:01.355219 764 log.go:172] (0xc00091a2c0) (0xc0008e2000) Stream added, broadcasting: 5\nI0318 13:13:01.356298 764 log.go:172] (0xc00091a2c0) Reply frame received for 5\nI0318 13:13:01.432015 764 log.go:172] (0xc00091a2c0) Data frame received for 3\nI0318 13:13:01.432046 764 log.go:172] (0xc000a026e0) (3) Data frame handling\nI0318 13:13:01.432065 764 log.go:172] (0xc000a026e0) (3) Data frame sent\nI0318 13:13:01.432177 764 log.go:172] (0xc00091a2c0) Data frame received for 5\nI0318 13:13:01.432221 764 log.go:172] (0xc0008e2000) (5) Data frame handling\nI0318 13:13:01.432250 764 log.go:172] (0xc0008e2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0318 13:13:01.432439 764 log.go:172] (0xc00091a2c0) Data frame received for 5\nI0318 13:13:01.432456 764 log.go:172] (0xc0008e2000) (5) Data frame handling\nI0318 13:13:01.432478 764 log.go:172] (0xc00091a2c0) Data frame received for 3\nI0318 13:13:01.432490 764 log.go:172] (0xc000a026e0) (3) Data frame handling\nI0318 13:13:01.433973 764 log.go:172] (0xc00091a2c0) Data frame received for 1\nI0318 13:13:01.433989 764 log.go:172] (0xc000a02640) (1) Data frame handling\nI0318 13:13:01.433999 764 log.go:172] (0xc000a02640) (1) Data frame sent\nI0318 13:13:01.434060 764 log.go:172] (0xc00091a2c0) (0xc000a02640) Stream removed, broadcasting: 1\nI0318 13:13:01.434239 764 log.go:172] (0xc00091a2c0) Go away received\nI0318 13:13:01.434572 764 log.go:172] (0xc00091a2c0) (0xc000a02640) Stream removed, broadcasting: 1\nI0318 13:13:01.434593 764 log.go:172] (0xc00091a2c0) (0xc000a026e0) Stream removed, broadcasting: 3\nI0318 13:13:01.434605 764 log.go:172] (0xc00091a2c0) (0xc0008e2000) Stream removed, broadcasting: 5\n" Mar 18 13:13:01.438: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 13:13:01.438: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 13:13:01.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5854 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 13:13:01.666: INFO: stderr: "I0318 13:13:01.576303 784 log.go:172] (0xc00012aa50) (0xc00051a6e0) Create stream\nI0318 13:13:01.576369 784 log.go:172] (0xc00012aa50) (0xc00051a6e0) Stream added, broadcasting: 1\nI0318 13:13:01.579505 784 log.go:172] (0xc00012aa50) Reply frame received for 1\nI0318 13:13:01.579556 784 log.go:172] (0xc00012aa50) (0xc00085c000) Create stream\nI0318 13:13:01.579580 784 log.go:172] (0xc00012aa50) (0xc00085c000) Stream added, broadcasting: 3\nI0318 13:13:01.580482 784 log.go:172] (0xc00012aa50) Reply frame received for 3\nI0318 13:13:01.580618 784 log.go:172] (0xc00012aa50) (0xc00051a780) Create stream\nI0318 13:13:01.580632 784 log.go:172] (0xc00012aa50) (0xc00051a780) Stream added, broadcasting: 5\nI0318 13:13:01.581758 784 log.go:172] (0xc00012aa50) Reply frame received for 5\nI0318 13:13:01.632564 784 log.go:172] (0xc00012aa50) Data frame received for 5\nI0318 13:13:01.632580 784 log.go:172] (0xc00051a780) (5) Data frame handling\nI0318 13:13:01.632586 784 log.go:172] (0xc00051a780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0318 13:13:01.659609 784 log.go:172] (0xc00012aa50) Data frame received for 5\nI0318 13:13:01.659666 784 log.go:172] (0xc00051a780) (5) Data frame handling\nI0318 13:13:01.659701 784 log.go:172] (0xc00012aa50) Data frame received for 
3\nI0318 13:13:01.659725 784 log.go:172] (0xc00085c000) (3) Data frame handling\nI0318 13:13:01.659737 784 log.go:172] (0xc00085c000) (3) Data frame sent\nI0318 13:13:01.659748 784 log.go:172] (0xc00012aa50) Data frame received for 3\nI0318 13:13:01.659757 784 log.go:172] (0xc00085c000) (3) Data frame handling\nI0318 13:13:01.661831 784 log.go:172] (0xc00012aa50) Data frame received for 1\nI0318 13:13:01.661862 784 log.go:172] (0xc00051a6e0) (1) Data frame handling\nI0318 13:13:01.661887 784 log.go:172] (0xc00051a6e0) (1) Data frame sent\nI0318 13:13:01.661922 784 log.go:172] (0xc00012aa50) (0xc00051a6e0) Stream removed, broadcasting: 1\nI0318 13:13:01.662115 784 log.go:172] (0xc00012aa50) Go away received\nI0318 13:13:01.662307 784 log.go:172] (0xc00012aa50) (0xc00051a6e0) Stream removed, broadcasting: 1\nI0318 13:13:01.662329 784 log.go:172] (0xc00012aa50) (0xc00085c000) Stream removed, broadcasting: 3\nI0318 13:13:01.662340 784 log.go:172] (0xc00012aa50) (0xc00051a780) Stream removed, broadcasting: 5\n" Mar 18 13:13:01.666: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 13:13:01.666: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 13:13:01.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5854 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 13:13:01.897: INFO: stderr: "I0318 13:13:01.794883 806 log.go:172] (0xc0006e8bb0) (0xc00033a820) Create stream\nI0318 13:13:01.794923 806 log.go:172] (0xc0006e8bb0) (0xc00033a820) Stream added, broadcasting: 1\nI0318 13:13:01.797982 806 log.go:172] (0xc0006e8bb0) Reply frame received for 1\nI0318 13:13:01.798251 806 log.go:172] (0xc0006e8bb0) (0xc00087c000) Create stream\nI0318 13:13:01.798290 806 log.go:172] (0xc0006e8bb0) (0xc00087c000) Stream added, broadcasting: 3\nI0318 13:13:01.800472 806 log.go:172] (0xc0006e8bb0) Reply frame received for 3\nI0318 13:13:01.800526 806 log.go:172] (0xc0006e8bb0) (0xc00087c0a0) Create stream\nI0318 13:13:01.800542 806 log.go:172] (0xc0006e8bb0) (0xc00087c0a0) Stream added, broadcasting: 5\nI0318 13:13:01.801936 806 log.go:172] (0xc0006e8bb0) Reply frame received for 5\nI0318 13:13:01.861735 806 log.go:172] (0xc0006e8bb0) Data frame received for 5\nI0318 13:13:01.861766 806 log.go:172] (0xc00087c0a0) (5) Data frame handling\nI0318 13:13:01.861788 806 log.go:172] (0xc00087c0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0318 13:13:01.887442 806 log.go:172] (0xc0006e8bb0) Data frame received for 3\nI0318 13:13:01.887483 806 log.go:172] (0xc00087c000) (3) Data frame handling\nI0318 13:13:01.887518 806 log.go:172] (0xc00087c000) (3) Data frame sent\nI0318 13:13:01.887539 806 log.go:172] (0xc0006e8bb0) Data frame received for 3\nI0318 13:13:01.887553 806 log.go:172] (0xc00087c000) (3) Data frame handling\nI0318 13:13:01.887942 806 log.go:172] (0xc0006e8bb0) Data frame received for 5\nI0318 13:13:01.887961 806 log.go:172] (0xc00087c0a0) (5) Data frame handling\nI0318 13:13:01.893343 806 log.go:172] (0xc0006e8bb0) Data frame received for 1\nI0318 13:13:01.893366 806 log.go:172] (0xc00033a820) (1) Data frame handling\nI0318 13:13:01.893387 806 log.go:172] (0xc00033a820) (1) Data frame sent\nI0318 13:13:01.893474 806 log.go:172] (0xc0006e8bb0) (0xc00033a820) Stream removed, broadcasting: 1\nI0318 13:13:01.893660 806 log.go:172] (0xc0006e8bb0) Go away received\nI0318 13:13:01.893847 
806 log.go:172] (0xc0006e8bb0) (0xc00033a820) Stream removed, broadcasting: 1\nI0318 13:13:01.893866 806 log.go:172] (0xc0006e8bb0) (0xc00087c000) Stream removed, broadcasting: 3\nI0318 13:13:01.893876 806 log.go:172] (0xc0006e8bb0) (0xc00087c0a0) Stream removed, broadcasting: 5\n" Mar 18 13:13:01.897: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 13:13:01.897: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 13:13:01.897: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 13:13:01.900: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 18 13:13:11.912: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 18 13:13:11.912: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 18 13:13:11.912: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 18 13:13:11.937: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999417s Mar 18 13:13:12.942: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.979374688s Mar 18 13:13:13.947: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.974719734s Mar 18 13:13:14.953: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.969317073s Mar 18 13:13:15.958: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.963895978s Mar 18 13:13:16.963: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.958692626s Mar 18 13:13:17.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.953557383s Mar 18 13:13:18.973: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.948102395s Mar 18 13:13:19.979: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.943489701s Mar 18 13:13:20.984: INFO: Verifying statefulset ss doesn't scale past 3 for another 937.90992ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5854 Mar 18 13:13:21.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5854 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 13:13:22.205: INFO: stderr: "I0318 13:13:22.110711 826 log.go:172] (0xc000118fd0) (0xc0005d4c80) Create stream\nI0318 13:13:22.110760 826 log.go:172] (0xc000118fd0) (0xc0005d4c80) Stream added, broadcasting: 1\nI0318 13:13:22.112721 826 log.go:172] (0xc000118fd0) Reply frame received for 1\nI0318 13:13:22.112752 826 log.go:172] (0xc000118fd0) (0xc000311b80) Create stream\nI0318 13:13:22.112764 826 log.go:172] (0xc000118fd0) (0xc000311b80) Stream added, broadcasting: 3\nI0318 13:13:22.113892 826 log.go:172] (0xc000118fd0) Reply frame received for 3\nI0318 13:13:22.113962 826 log.go:172] (0xc000118fd0) (0xc0005d4d20) Create stream\nI0318 13:13:22.113993 826 log.go:172] (0xc000118fd0) (0xc0005d4d20) Stream added, broadcasting: 5\nI0318 13:13:22.115280 826 log.go:172] (0xc000118fd0) Reply frame received for 5\nI0318 13:13:22.198207 826 log.go:172] (0xc000118fd0) Data frame received for 3\nI0318 13:13:22.198254 826 log.go:172] (0xc000311b80) (3) Data frame handling\nI0318 13:13:22.198269 826 log.go:172] (0xc000311b80) (3) Data frame sent\nI0318 13:13:22.198278 826 log.go:172] (0xc000118fd0) Data frame received for 3\nI0318 13:13:22.198287 826 log.go:172] (0xc000311b80) (3) Data frame
handling\nI0318 13:13:22.198298 826 log.go:172] (0xc000118fd0) Data frame received for 5\nI0318 13:13:22.198306 826 log.go:172] (0xc0005d4d20) (5) Data frame handling\nI0318 13:13:22.198315 826 log.go:172] (0xc0005d4d20) (5) Data frame sent\nI0318 13:13:22.198332 826 log.go:172] (0xc000118fd0) Data frame received for 5\nI0318 13:13:22.198344 826 log.go:172] (0xc0005d4d20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0318 13:13:22.199800 826 log.go:172] (0xc000118fd0) Data frame received for 1\nI0318 13:13:22.199848 826 log.go:172] (0xc0005d4c80) (1) Data frame handling\nI0318 13:13:22.199868 826 log.go:172] (0xc0005d4c80) (1) Data frame sent\nI0318 13:13:22.199887 826 log.go:172] (0xc000118fd0) (0xc0005d4c80) Stream removed, broadcasting: 1\nI0318 13:13:22.199906 826 log.go:172] (0xc000118fd0) Go away received\nI0318 13:13:22.200909 826 log.go:172] (0xc000118fd0) (0xc0005d4c80) Stream removed, broadcasting: 1\nI0318 13:13:22.201036 826 log.go:172] (0xc000118fd0) (0xc000311b80) Stream removed, broadcasting: 3\nI0318 13:13:22.201291 826 log.go:172] (0xc000118fd0) (0xc0005d4d20) Stream removed, broadcasting: 5\n" Mar 18 13:13:22.205: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 13:13:22.205: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 13:13:22.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5854 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 13:13:22.417: INFO: stderr: "I0318 13:13:22.351052 850 log.go:172] (0xc000ac4420) (0xc0003f6820) Create stream\nI0318 13:13:22.351110 850 log.go:172] (0xc000ac4420) (0xc0003f6820) Stream added, broadcasting: 1\nI0318 13:13:22.357834 850 log.go:172] (0xc000ac4420) Reply frame received for 1\nI0318 13:13:22.357893 850 log.go:172] (0xc000ac4420) (0xc0003f6000) Create stream\nI0318 13:13:22.357912 850 log.go:172] (0xc000ac4420) (0xc0003f6000) Stream added, broadcasting: 3\nI0318 13:13:22.359327 850 log.go:172] (0xc000ac4420) Reply frame received for 3\nI0318 13:13:22.359360 850 log.go:172] (0xc000ac4420) (0xc0005ce3c0) Create stream\nI0318 13:13:22.359370 850 log.go:172] (0xc000ac4420) (0xc0005ce3c0) Stream added, broadcasting: 5\nI0318 13:13:22.360209 850 log.go:172] (0xc000ac4420) Reply frame received for 5\nI0318 13:13:22.412416 850 log.go:172] (0xc000ac4420) Data frame received for 5\nI0318 13:13:22.412435 850 log.go:172] (0xc0005ce3c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0318 13:13:22.412449 850 log.go:172] (0xc000ac4420) Data frame received for 3\nI0318 13:13:22.412466 850 log.go:172] (0xc0003f6000) (3) Data frame handling\nI0318 13:13:22.412479 850 log.go:172] (0xc0003f6000) (3) Data frame sent\nI0318 13:13:22.412488 850 log.go:172] (0xc000ac4420) Data frame received for 3\nI0318 13:13:22.412493 850 log.go:172] (0xc0003f6000) (3) Data frame handling\nI0318 13:13:22.412505 850 log.go:172] (0xc0005ce3c0) (5) Data frame sent\nI0318 13:13:22.412513 850 log.go:172] (0xc000ac4420) Data frame received for 5\nI0318 13:13:22.412518 850 log.go:172] (0xc0005ce3c0) (5) Data frame handling\nI0318 13:13:22.414166 850 log.go:172] (0xc000ac4420) Data frame received for 1\nI0318 13:13:22.414179 850 log.go:172] (0xc0003f6820) (1) Data frame handling\nI0318 13:13:22.414187 850 log.go:172] (0xc0003f6820) (1) Data frame sent\nI0318 13:13:22.414197 850 log.go:172] (0xc000ac4420) 
(0xc0003f6820) Stream removed, broadcasting: 1\nI0318 13:13:22.414225 850 log.go:172] (0xc000ac4420) Go away received\nI0318 13:13:22.414436 850 log.go:172] (0xc000ac4420) (0xc0003f6820) Stream removed, broadcasting: 1\nI0318 13:13:22.414446 850 log.go:172] (0xc000ac4420) (0xc0003f6000) Stream removed, broadcasting: 3\nI0318 13:13:22.414452 850 log.go:172] (0xc000ac4420) (0xc0005ce3c0) Stream removed, broadcasting: 5\n" Mar 18 13:13:22.417: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 13:13:22.417: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 13:13:22.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5854 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 13:13:22.617: INFO: stderr: "I0318 13:13:22.542091 870 log.go:172] (0xc000a1e420) (0xc0008985a0) Create stream\nI0318 13:13:22.542147 870 log.go:172] (0xc000a1e420) (0xc0008985a0) Stream added, broadcasting: 1\nI0318 13:13:22.549666 870 log.go:172] (0xc000a1e420) Reply frame received for 1\nI0318 13:13:22.549724 870 log.go:172] (0xc000a1e420) (0xc000898640) Create stream\nI0318 13:13:22.549742 870 log.go:172] (0xc000a1e420) (0xc000898640) Stream added, broadcasting: 3\nI0318 13:13:22.551082 870 log.go:172] (0xc000a1e420) Reply frame received for 3\nI0318 13:13:22.551406 870 log.go:172] (0xc000a1e420) (0xc0008ba000) Create stream\nI0318 13:13:22.551461 870 log.go:172] (0xc000a1e420) (0xc0008ba000) Stream added, broadcasting: 5\nI0318 13:13:22.552869 870 log.go:172] (0xc000a1e420) Reply frame received for 5\nI0318 13:13:22.611395 870 log.go:172] (0xc000a1e420) Data frame received for 3\nI0318 13:13:22.611417 870 log.go:172] (0xc000898640) (3) Data frame handling\nI0318 13:13:22.611425 870 log.go:172] (0xc000898640) (3) Data frame sent\nI0318 13:13:22.611431 870 log.go:172] (0xc000a1e420) Data frame received for 3\nI0318 13:13:22.611436 870 log.go:172] (0xc000898640) (3) Data frame handling\nI0318 13:13:22.611457 870 log.go:172] (0xc000a1e420) Data frame received for 5\nI0318 13:13:22.611483 870 log.go:172] (0xc0008ba000) (5) Data frame handling\nI0318 13:13:22.611503 870 log.go:172] (0xc0008ba000) (5) Data frame sent\nI0318 13:13:22.611515 870 log.go:172] (0xc000a1e420) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0318 13:13:22.611526 870 log.go:172] (0xc0008ba000) (5) Data frame handling\nI0318 13:13:22.613397 870 log.go:172] (0xc000a1e420) Data frame received for 1\nI0318 13:13:22.613484 870 log.go:172] (0xc0008985a0) (1) Data frame handling\nI0318 13:13:22.613523 870 log.go:172] (0xc0008985a0) (1) Data frame sent\nI0318 13:13:22.613551 870 log.go:172] (0xc000a1e420) (0xc0008985a0) Stream removed, broadcasting: 1\nI0318 13:13:22.613632 870 log.go:172] (0xc000a1e420) Go away received\nI0318 13:13:22.613936 870 log.go:172] (0xc000a1e420) (0xc0008985a0) Stream removed, broadcasting: 1\nI0318 13:13:22.613954 870 log.go:172] (0xc000a1e420) (0xc000898640) Stream removed, broadcasting: 3\nI0318 13:13:22.613961 870 log.go:172] (0xc000a1e420) (0xc0008ba000) Stream removed, broadcasting: 5\n" Mar 18 13:13:22.617: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 13:13:22.617: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 13:13:22.617: INFO: Scaling statefulset ss to 0 STEP: 
Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 18 13:13:42.634: INFO: Deleting all statefulset in ns statefulset-5854 Mar 18 13:13:42.637: INFO: Scaling statefulset ss to 0 Mar 18 13:13:42.648: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 13:13:42.651: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:13:42.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5854" for this suite. Mar 18 13:13:48.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:13:48.805: INFO: namespace statefulset-5854 deletion completed in 6.124030041s • [SLOW TEST:88.334 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:13:48.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 18 13:13:51.911: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:13:51.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3487" for this suite. 
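Note on the StatefulSet exercise earlier in this run: the repeated `mv -v /usr/share/nginx/html/index.html /tmp/` via kubectl exec is how the suite flips a pod to Ready=false, because the pods' readiness probe fetches that file over HTTP; moving it back restores readiness. The StatefulSet manifest itself is never printed in this log, so the following is only a minimal sketch of the pattern (the image and probe details are assumptions, not the suite's actual spec):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss                     # name taken from the log; everything below is illustrative
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: nginx           # assumption: any server that serves /usr/share/nginx/html
        readinessProbe:
          httpGet:
            path: /index.html  # fails once the file is mv'd away
            port: 80

With a probe like this, the controller will not continue an ordered scale-up or scale-down past a not-Ready pod, which is exactly the halting behaviour the "doesn't scale past" countdown lines verify.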
Mar 18 13:13:57.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:13:58.043: INFO: namespace container-runtime-3487 deletion completed in 6.092236001s • [SLOW TEST:9.238 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:13:58.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:14:03.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7959" for this suite. 
Mar 18 13:14:09.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:14:09.852: INFO: namespace watch-7959 deletion completed in 6.19698581s • [SLOW TEST:11.809 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:14:09.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-d3d3c260-f49d-4938-9e86-41e159844386 STEP: Creating a pod to test consume secrets Mar 18 13:14:09.919: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-af869afe-ded3-4b37-85c9-09879bbac47a" in namespace "projected-6106" to be "success or failure" Mar 18 13:14:09.962: INFO: Pod "pod-projected-secrets-af869afe-ded3-4b37-85c9-09879bbac47a": Phase="Pending", Reason="", readiness=false. Elapsed: 42.711129ms Mar 18 13:14:11.965: INFO: Pod "pod-projected-secrets-af869afe-ded3-4b37-85c9-09879bbac47a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045937497s Mar 18 13:14:13.969: INFO: Pod "pod-projected-secrets-af869afe-ded3-4b37-85c9-09879bbac47a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050102257s STEP: Saw pod success Mar 18 13:14:13.970: INFO: Pod "pod-projected-secrets-af869afe-ded3-4b37-85c9-09879bbac47a" satisfied condition "success or failure" Mar 18 13:14:13.972: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-af869afe-ded3-4b37-85c9-09879bbac47a container secret-volume-test: STEP: delete the pod Mar 18 13:14:13.992: INFO: Waiting for pod pod-projected-secrets-af869afe-ded3-4b37-85c9-09879bbac47a to disappear Mar 18 13:14:14.007: INFO: Pod pod-projected-secrets-af869afe-ded3-4b37-85c9-09879bbac47a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:14:14.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6106" for this suite. 
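The projected-secret case above mounts a single secret into one pod at two different paths via projected volumes. A minimal sketch of that shape (the pod name, secret name, image, command, and mount paths are hypothetical; the suite generates its own):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo      # hypothetical
spec:
  containers:
  - name: secret-volume-test
    image: busybox                      # assumption
    command: ["sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]
    volumeMounts:
    - name: secret-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-1
    projected:
      sources:
      - secret:
          name: projected-secret-test   # hypothetical secret name
  - name: secret-2
    projected:
      sources:
      - secret:
          name: projected-secret-test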
Mar 18 13:14:20.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:14:20.122: INFO: namespace projected-6106 deletion completed in 6.111994454s • [SLOW TEST:10.270 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:14:20.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Mar 18 13:14:20.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2804' Mar 18 13:14:20.486: INFO: stderr: "" Mar 18 13:14:20.486: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Mar 18 13:14:21.491: INFO: Selector matched 1 pods for map[app:redis] Mar 18 13:14:21.491: INFO: Found 0 / 1 Mar 18 13:14:22.491: INFO: Selector matched 1 pods for map[app:redis] Mar 18 13:14:22.491: INFO: Found 0 / 1 Mar 18 13:14:23.491: INFO: Selector matched 1 pods for map[app:redis] Mar 18 13:14:23.491: INFO: Found 1 / 1 Mar 18 13:14:23.491: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 18 13:14:23.495: INFO: Selector matched 1 pods for map[app:redis] Mar 18 13:14:23.495: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for matching strings Mar 18 13:14:23.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6m4z6 redis-master --namespace=kubectl-2804' Mar 18 13:14:23.600: INFO: stderr: "" Mar 18 13:14:23.600: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 18 Mar 13:14:22.925 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Mar 13:14:22.925 # Server started, Redis version 3.2.12\n1:M 18 Mar 13:14:22.925 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Mar 13:14:22.925 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Mar 18 13:14:23.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6m4z6 redis-master --namespace=kubectl-2804 --tail=1' Mar 18 13:14:23.707: INFO: stderr: "" Mar 18 13:14:23.707: INFO: stdout: "1:M 18 Mar 13:14:22.925 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Mar 18 13:14:23.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6m4z6 redis-master --namespace=kubectl-2804 --limit-bytes=1' Mar 18 13:14:23.813: INFO: stderr: "" Mar 18 13:14:23.814: INFO: stdout: " " STEP: exposing timestamps Mar 18 13:14:23.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6m4z6 redis-master --namespace=kubectl-2804 --tail=1 --timestamps' Mar 18 13:14:23.918: INFO: stderr: "" Mar 18 13:14:23.918: INFO: stdout: "2020-03-18T13:14:22.934835436Z 1:M 18 Mar 13:14:22.925 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Mar 18 13:14:26.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6m4z6 redis-master --namespace=kubectl-2804 --since=1s' Mar 18 13:14:26.533: INFO: stderr: "" Mar 18 13:14:26.534: INFO: stdout: "" Mar 18 13:14:26.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6m4z6 redis-master --namespace=kubectl-2804 --since=24h' Mar 18 13:14:26.633: INFO: stderr: "" Mar 18 13:14:26.633: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 18 Mar 13:14:22.925 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Mar 13:14:22.925 # Server started, Redis version 3.2.12\n1:M 18 Mar 13:14:22.925 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Mar 13:14:22.925 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Mar 18 13:14:26.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2804' Mar 18 13:14:26.725: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 13:14:26.725: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Mar 18 13:14:26.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-2804' Mar 18 13:14:26.818: INFO: stderr: "No resources found.\n" Mar 18 13:14:26.818: INFO: stdout: "" Mar 18 13:14:26.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-2804 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 13:14:26.916: INFO: stderr: "" Mar 18 13:14:26.916: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:14:26.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2804" for this suite. 
Mar 18 13:14:46.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:14:47.037: INFO: namespace kubectl-2804 deletion completed in 20.117623026s • [SLOW TEST:26.915 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:14:47.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 18 13:14:50.125: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:14:50.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-258" for this suite. 
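Both Container Runtime cases in this run exercise the terminationMessagePath and terminationMessagePolicy container fields: the earlier one writes the message to a custom path as a non-root user, while this one exits non-zero without writing the file, so the kubelet falls back to the container's log output. A minimal sketch of the fallback variant (pod name, image, and command are assumptions, not the suite's spec):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo          # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                        # assumption
    # Exit non-zero without writing /dev/termination-log, so the kubelet
    # falls back to the tail of the container log ("DONE") as the message.
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError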
Mar 18 13:14:56.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:14:56.269: INFO: namespace container-runtime-258 deletion completed in 6.107643445s • [SLOW TEST:9.232 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:14:56.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 18 13:14:56.364: INFO: Waiting up to 5m0s for pod "downwardapi-volume-856fe445-3540-40f8-8310-5a7a516f63dc" in namespace "projected-2574" to be "success or failure" Mar 18 13:14:56.374: INFO: Pod "downwardapi-volume-856fe445-3540-40f8-8310-5a7a516f63dc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.265966ms Mar 18 13:14:58.379: INFO: Pod "downwardapi-volume-856fe445-3540-40f8-8310-5a7a516f63dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014832562s Mar 18 13:15:00.383: INFO: Pod "downwardapi-volume-856fe445-3540-40f8-8310-5a7a516f63dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019284396s STEP: Saw pod success Mar 18 13:15:00.383: INFO: Pod "downwardapi-volume-856fe445-3540-40f8-8310-5a7a516f63dc" satisfied condition "success or failure" Mar 18 13:15:00.386: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-856fe445-3540-40f8-8310-5a7a516f63dc container client-container: STEP: delete the pod Mar 18 13:15:00.430: INFO: Waiting for pod downwardapi-volume-856fe445-3540-40f8-8310-5a7a516f63dc to disappear Mar 18 13:15:00.433: INFO: Pod downwardapi-volume-856fe445-3540-40f8-8310-5a7a516f63dc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:15:00.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2574" for this suite. 
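The projected downward API test surfaces the container's own memory request as a file inside a projected volume. A minimal sketch of the mechanism (pod name, image, command, request value, and file path are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # hypothetical
spec:
  containers:
  - name: client-container
    image: busybox                   # assumption
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi                 # the value the projected file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory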
Mar 18 13:15:06.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:15:06.544: INFO: namespace projected-2574 deletion completed in 6.108533507s • [SLOW TEST:10.275 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:15:06.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Mar 18 13:15:06.634: INFO: Waiting up to 5m0s for pod "client-containers-843fcc7b-7ab6-466d-9d8d-cfc401805fb0" in namespace "containers-1286" to be "success or failure" Mar 18 13:15:06.650: INFO: Pod "client-containers-843fcc7b-7ab6-466d-9d8d-cfc401805fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.180252ms Mar 18 13:15:08.655: INFO: Pod "client-containers-843fcc7b-7ab6-466d-9d8d-cfc401805fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021037587s Mar 18 13:15:10.659: INFO: Pod "client-containers-843fcc7b-7ab6-466d-9d8d-cfc401805fb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024986652s STEP: Saw pod success Mar 18 13:15:10.659: INFO: Pod "client-containers-843fcc7b-7ab6-466d-9d8d-cfc401805fb0" satisfied condition "success or failure" Mar 18 13:15:10.662: INFO: Trying to get logs from node iruya-worker2 pod client-containers-843fcc7b-7ab6-466d-9d8d-cfc401805fb0 container test-container: STEP: delete the pod Mar 18 13:15:10.682: INFO: Waiting for pod client-containers-843fcc7b-7ab6-466d-9d8d-cfc401805fb0 to disappear Mar 18 13:15:10.686: INFO: Pod client-containers-843fcc7b-7ab6-466d-9d8d-cfc401805fb0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:15:10.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1286" for this suite. 
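The Docker Containers case checks that a pod-spec `command` replaces the image's ENTRYPOINT (an `args` field would likewise replace the image's CMD). A minimal sketch (pod name, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo    # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                # assumption
    # `command` overrides the image ENTRYPOINT; omit it and the image default runs.
    command: ["echo", "entrypoint overridden"]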
Mar 18 13:15:16.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:15:16.798: INFO: namespace containers-1286 deletion completed in 6.109128634s • [SLOW TEST:10.253 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:15:16.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-85d80e3a-fc33-43e5-8070-5a19b47f921f STEP: Creating a pod to test consume secrets Mar 18 13:15:16.890: INFO: Waiting up to 5m0s for pod "pod-secrets-55060bee-4697-4584-8ef9-bcf89abd0325" in namespace "secrets-523" to be "success or failure" Mar 18 13:15:16.907: INFO: Pod "pod-secrets-55060bee-4697-4584-8ef9-bcf89abd0325": Phase="Pending", Reason="", readiness=false. Elapsed: 17.32917ms Mar 18 13:15:18.921: INFO: Pod "pod-secrets-55060bee-4697-4584-8ef9-bcf89abd0325": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030988229s Mar 18 13:15:20.925: INFO: Pod "pod-secrets-55060bee-4697-4584-8ef9-bcf89abd0325": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034911098s STEP: Saw pod success Mar 18 13:15:20.925: INFO: Pod "pod-secrets-55060bee-4697-4584-8ef9-bcf89abd0325" satisfied condition "success or failure" Mar 18 13:15:20.928: INFO: Trying to get logs from node iruya-worker pod pod-secrets-55060bee-4697-4584-8ef9-bcf89abd0325 container secret-volume-test: STEP: delete the pod Mar 18 13:15:20.944: INFO: Waiting for pod pod-secrets-55060bee-4697-4584-8ef9-bcf89abd0325 to disappear Mar 18 13:15:20.949: INFO: Pod pod-secrets-55060bee-4697-4584-8ef9-bcf89abd0325 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:15:20.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-523" for this suite. 
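The Secrets case combines a pod-level securityContext (a non-root UID plus an fsGroup) with a secret volume's defaultMode, then asserts the mounted file's mode and ownership from inside the container. A minimal sketch (names, UID/GID, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo          # hypothetical
spec:
  securityContext:
    runAsUser: 1000               # non-root; assumed value
    fsGroup: 1000
  containers:
  - name: secret-volume-test
    image: busybox                # assumption
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test     # hypothetical
      defaultMode: 0400           # mode applied to each projected key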
Mar 18 13:15:26.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:15:27.047: INFO: namespace secrets-523 deletion completed in 6.095158436s • [SLOW TEST:10.249 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:15:27.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-dd757d4a-7636-4ac5-b9dc-0a981a4e443b STEP: Creating a pod to test consume configMaps Mar 18 13:15:27.124: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-41b167d3-11fd-4ff0-a119-8bf93a4001ff" in namespace "projected-5612" to be "success or failure" Mar 18 13:15:27.142: INFO: Pod "pod-projected-configmaps-41b167d3-11fd-4ff0-a119-8bf93a4001ff": Phase="Pending", Reason="", readiness=false. Elapsed: 17.747837ms Mar 18 13:15:29.146: INFO: Pod "pod-projected-configmaps-41b167d3-11fd-4ff0-a119-8bf93a4001ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022188644s Mar 18 13:15:31.150: INFO: Pod "pod-projected-configmaps-41b167d3-11fd-4ff0-a119-8bf93a4001ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026575939s STEP: Saw pod success Mar 18 13:15:31.150: INFO: Pod "pod-projected-configmaps-41b167d3-11fd-4ff0-a119-8bf93a4001ff" satisfied condition "success or failure" Mar 18 13:15:31.154: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-41b167d3-11fd-4ff0-a119-8bf93a4001ff container projected-configmap-volume-test: STEP: delete the pod Mar 18 13:15:31.217: INFO: Waiting for pod pod-projected-configmaps-41b167d3-11fd-4ff0-a119-8bf93a4001ff to disappear Mar 18 13:15:31.224: INFO: Pod pod-projected-configmaps-41b167d3-11fd-4ff0-a119-8bf93a4001ff no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:15:31.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5612" for this suite. 
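"Mappings and Item mode" in the test name above means individual configMap keys are remapped to chosen file paths with per-item modes, rather than projecting every key under its own name. A minimal sketch (pod, configMap, key, and path names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo        # hypothetical
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox                           # assumption
    command: ["sh", "-c", "cat /etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # hypothetical
          items:
          - key: data-2           # only this key is projected...
            path: path/to/data-2  # ...at a remapped path
            mode: 0400            # per-item mode overrides defaultMode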
Mar 18 13:15:37.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:15:37.342: INFO: namespace projected-5612 deletion completed in 6.115041244s • [SLOW TEST:10.295 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:15:37.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Mar 18 13:15:37.415: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Mar 18 13:15:37.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7454' Mar 18 13:15:37.690: INFO: stderr: "" Mar 18 13:15:37.690: INFO: stdout: "service/redis-slave created\n" Mar 18 13:15:37.691: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Mar 18 13:15:37.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7454' Mar 18 13:15:37.986: INFO: stderr: "" Mar 18 13:15:37.986: INFO: stdout: "service/redis-master created\n" Mar 18 13:15:37.987: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Mar 18 13:15:37.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7454' Mar 18 13:15:38.280: INFO: stderr: "" Mar 18 13:15:38.280: INFO: stdout: "service/frontend created\n" Mar 18 13:15:38.280: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Mar 18 13:15:38.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7454' Mar 18 13:15:38.530: INFO: stderr: "" Mar 18 13:15:38.530: INFO: stdout: "deployment.apps/frontend created\n" Mar 18 13:15:38.530: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 18 13:15:38.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7454' Mar 18 13:15:38.786: INFO: stderr: "" Mar 18 13:15:38.786: INFO: stdout: "deployment.apps/redis-master created\n" Mar 18 13:15:38.786: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Mar 18 13:15:38.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7454' Mar 18 13:15:39.059: INFO: stderr: "" Mar 18 13:15:39.059: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Mar 18 13:15:39.059: INFO: Waiting for all frontend pods to be Running. Mar 18 13:15:49.109: INFO: Waiting for frontend to serve content. Mar 18 13:15:49.126: INFO: Trying to add a new entry to the guestbook. Mar 18 13:15:49.144: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 18 13:15:49.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7454' Mar 18 13:15:49.333: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Mar 18 13:15:49.333: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Mar 18 13:15:49.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7454' Mar 18 13:15:49.474: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 13:15:49.474: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 18 13:15:49.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7454' Mar 18 13:15:49.588: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 13:15:49.588: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 18 13:15:49.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7454' Mar 18 13:15:49.707: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 13:15:49.707: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 18 13:15:49.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7454' Mar 18 13:15:49.805: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 13:15:49.805: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 18 13:15:49.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7454' Mar 18 13:15:49.896: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 13:15:49.896: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:15:49.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7454" for this suite. 
Mar 18 13:16:27.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:16:28.039: INFO: namespace kubectl-7454 deletion completed in 38.104335076s • [SLOW TEST:50.697 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:16:28.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 18 13:16:28.116: INFO: Waiting up to 5m0s for pod "downward-api-7c92774f-782a-45d3-9fba-060bfde5979d" in namespace "downward-api-2319" to be "success or failure" Mar 18 13:16:28.120: INFO: Pod "downward-api-7c92774f-782a-45d3-9fba-060bfde5979d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021352ms Mar 18 13:16:30.124: INFO: Pod "downward-api-7c92774f-782a-45d3-9fba-060bfde5979d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008590256s Mar 18 13:16:32.129: INFO: Pod "downward-api-7c92774f-782a-45d3-9fba-060bfde5979d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01341165s STEP: Saw pod success Mar 18 13:16:32.129: INFO: Pod "downward-api-7c92774f-782a-45d3-9fba-060bfde5979d" satisfied condition "success or failure" Mar 18 13:16:32.133: INFO: Trying to get logs from node iruya-worker pod downward-api-7c92774f-782a-45d3-9fba-060bfde5979d container dapi-container: STEP: delete the pod Mar 18 13:16:32.162: INFO: Waiting for pod downward-api-7c92774f-782a-45d3-9fba-060bfde5979d to disappear Mar 18 13:16:32.173: INFO: Pod downward-api-7c92774f-782a-45d3-9fba-060bfde5979d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:16:32.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2319" for this suite. 
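The pod this test creates pulls the node's IP into the container through the downward API. A minimal sketch of such a pod, assuming a busybox image and hypothetical object and variable names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-host-ip        # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep HOST_IP"]
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP  # the downward API field the test asserts on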
Mar 18 13:16:38.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:16:38.269: INFO: namespace downward-api-2319 deletion completed in 6.092287794s • [SLOW TEST:10.230 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:16:38.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 18 13:16:38.350: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac7f2227-f86d-449f-8592-203d605d883b" in namespace "downward-api-4839" to be "success or failure" Mar 18 13:16:38.365: INFO: Pod "downwardapi-volume-ac7f2227-f86d-449f-8592-203d605d883b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.986966ms Mar 18 13:16:40.369: INFO: Pod "downwardapi-volume-ac7f2227-f86d-449f-8592-203d605d883b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019195629s Mar 18 13:16:42.373: INFO: Pod "downwardapi-volume-ac7f2227-f86d-449f-8592-203d605d883b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023262195s STEP: Saw pod success Mar 18 13:16:42.373: INFO: Pod "downwardapi-volume-ac7f2227-f86d-449f-8592-203d605d883b" satisfied condition "success or failure" Mar 18 13:16:42.376: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ac7f2227-f86d-449f-8592-203d605d883b container client-container: STEP: delete the pod Mar 18 13:16:42.403: INFO: Waiting for pod downwardapi-volume-ac7f2227-f86d-449f-8592-203d605d883b to disappear Mar 18 13:16:42.414: INFO: Pod downwardapi-volume-ac7f2227-f86d-449f-8592-203d605d883b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:16:42.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4839" for this suite. 
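The point of this test is the fallback: when a container declares no memory limit, a downwardAPI volume item for limits.memory reports the node's allocatable memory instead. A minimal sketch, with assumed image and names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-memlimit        # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
        # deliberately no resources.limits.memory: the file then holds node allocatable memory
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory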
Mar 18 13:16:48.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:16:48.516: INFO: namespace downward-api-4839 deletion completed in 6.094769596s • [SLOW TEST:10.247 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:16:48.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-516642b8-6fd8-4ecc-855c-172b91f7f105 STEP: Creating a pod to test consume secrets Mar 18 13:16:48.608: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dd8bec4b-4484-4f5f-b181-73cd1feb9cf4" in namespace "projected-6501" to be "success or failure" Mar 18 13:16:48.618: INFO: Pod "pod-projected-secrets-dd8bec4b-4484-4f5f-b181-73cd1feb9cf4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.741946ms Mar 18 13:16:50.621: INFO: Pod "pod-projected-secrets-dd8bec4b-4484-4f5f-b181-73cd1feb9cf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0135821s Mar 18 13:16:52.625: INFO: Pod "pod-projected-secrets-dd8bec4b-4484-4f5f-b181-73cd1feb9cf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017234446s STEP: Saw pod success Mar 18 13:16:52.625: INFO: Pod "pod-projected-secrets-dd8bec4b-4484-4f5f-b181-73cd1feb9cf4" satisfied condition "success or failure" Mar 18 13:16:52.627: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-dd8bec4b-4484-4f5f-b181-73cd1feb9cf4 container projected-secret-volume-test: STEP: delete the pod Mar 18 13:16:52.656: INFO: Waiting for pod pod-projected-secrets-dd8bec4b-4484-4f5f-b181-73cd1feb9cf4 to disappear Mar 18 13:16:52.671: INFO: Pod pod-projected-secrets-dd8bec4b-4484-4f5f-b181-73cd1feb9cf4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:16:52.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6501" for this suite. 
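"Mappings and Item Mode" means the secret key is remapped to a different file path and given an explicit per-file mode inside a projected volume. A sketch with hypothetical secret name, key, and mode:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets       # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/projected
          readOnly: true
      volumes:
      - name: secret-volume
        projected:
          sources:
          - secret:
              name: mysecret            # hypothetical secret name
              items:
              - key: data-1             # hypothetical key
                path: new-path-data-1   # the remapped path
                mode: 0400              # the per-item file mode under test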
Mar 18 13:16:58.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:16:58.795: INFO: namespace projected-6501 deletion completed in 6.116456685s • [SLOW TEST:10.278 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:16:58.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 13:16:58.909: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1a9dc89c-22a7-45af-9e51-0473b4dd8baa", Controller:(*bool)(0xc0024eda0a), BlockOwnerDeletion:(*bool)(0xc0024eda0b)}} Mar 18 13:16:58.923: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ff321f2b-e1dc-4187-97f7-a2e1478182a0", Controller:(*bool)(0xc0024e48d2), BlockOwnerDeletion:(*bool)(0xc0024e48d3)}} Mar 18 13:16:58.964: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"dd0fc78f-1663-452c-86ba-08215d66cdb9", Controller:(*bool)(0xc0024edb9a), BlockOwnerDeletion:(*bool)(0xc0024edb9b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:17:04.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5324" for this suite. 
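The OwnerReferences dumped above form a cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2); the test asserts the garbage collector still deletes all three rather than deadlocking. One link of such a cycle looks roughly like the sketch below; the uid must be the live UID of the owning object, so these references are wired up through the API at runtime rather than from static YAML:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
      ownerReferences:
      - apiVersion: v1
        kind: Pod
        name: pod3
        uid: 1a9dc89c-22a7-45af-9e51-0473b4dd8baa   # pod3's UID, taken from the log above
        controller: true
        blockOwnerDeletion: true
    spec:
      containers:
      - name: main                                  # hypothetical container
        image: busybox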
Mar 18 13:17:10.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:17:10.104: INFO: namespace gc-5324 deletion completed in 6.093896178s • [SLOW TEST:11.309 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:17:10.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 13:17:10.188: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 8.102254ms)
Mar 18 13:17:10.192: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.072388ms)
Mar 18 13:17:10.195: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.347345ms)
Mar 18 13:17:10.198: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.417344ms)
Mar 18 13:17:10.202: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.074657ms)
Mar 18 13:17:10.205: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.535659ms)
Mar 18 13:17:10.208: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.829715ms)
Mar 18 13:17:10.211: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.215285ms)
Mar 18 13:17:10.214: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.133263ms)
Mar 18 13:17:10.218: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.095377ms)
Mar 18 13:17:10.221: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.289089ms)
Mar 18 13:17:10.224: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.182229ms)
Mar 18 13:17:10.227: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.350804ms)
Mar 18 13:17:10.231: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.631601ms)
Mar 18 13:17:10.235: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.386682ms)
Mar 18 13:17:10.238: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.47269ms)
Mar 18 13:17:10.242: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.701382ms)
Mar 18 13:17:10.246: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.865911ms)
Mar 18 13:17:10.249: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.493887ms)
Mar 18 13:17:10.253: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.801271ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:17:10.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5086" for this suite. Mar 18 13:17:16.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:17:16.352: INFO: namespace proxy-5086 deletion completed in 6.095276581s • [SLOW TEST:6.248 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:17:16.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 18 13:17:16.398: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17da209d-e12f-468a-bad1-a05de77d4bab" in namespace "downward-api-6026" to be "success or failure" Mar 18 13:17:16.414: INFO: Pod "downwardapi-volume-17da209d-e12f-468a-bad1-a05de77d4bab": Phase="Pending", Reason="", readiness=false. Elapsed: 15.713204ms Mar 18 13:17:18.418: INFO: Pod "downwardapi-volume-17da209d-e12f-468a-bad1-a05de77d4bab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019812004s Mar 18 13:17:20.423: INFO: Pod "downwardapi-volume-17da209d-e12f-468a-bad1-a05de77d4bab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024123038s STEP: Saw pod success Mar 18 13:17:20.423: INFO: Pod "downwardapi-volume-17da209d-e12f-468a-bad1-a05de77d4bab" satisfied condition "success or failure" Mar 18 13:17:20.426: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-17da209d-e12f-468a-bad1-a05de77d4bab container client-container: STEP: delete the pod Mar 18 13:17:20.457: INFO: Waiting for pod downwardapi-volume-17da209d-e12f-468a-bad1-a05de77d4bab to disappear Mar 18 13:17:20.468: INFO: Pod downwardapi-volume-17da209d-e12f-468a-bad1-a05de77d4bab no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:17:20.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6026" for this suite. 
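This cpu-limit check mirrors the earlier memory test, except the container does set a limit and the volume item reads limits.cpu, usually with a millicore divisor. A sketch of the relevant spec fragments (names, image, and the 500m limit are assumptions):

    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: 500m              # with divisor 1m below, the file reads "500"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu
            divisor: 1m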
Mar 18 13:17:26.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:17:26.565: INFO: namespace downward-api-6026 deletion completed in 6.094777985s • [SLOW TEST:10.213 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:17:26.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 18 13:17:26.699: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-507,SelfLink:/api/v1/namespaces/watch-507/configmaps/e2e-watch-test-resource-version,UID:f479b632-bc25-45d9-a641-c89dabd5aee3,ResourceVersion:516925,Generation:0,CreationTimestamp:2020-03-18 13:17:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 18 13:17:26.699: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-507,SelfLink:/api/v1/namespaces/watch-507/configmaps/e2e-watch-test-resource-version,UID:f479b632-bc25-45d9-a641-c89dabd5aee3,ResourceVersion:516926,Generation:0,CreationTimestamp:2020-03-18 13:17:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:17:26.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-507" for this suite. 
Mar 18 13:17:32.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:17:32.789: INFO: namespace watch-507 deletion completed in 6.085104557s • [SLOW TEST:6.223 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:17:32.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-7511/secret-test-0340d934-f277-4310-8d13-e023f6ed032b STEP: Creating a pod to test consume secrets Mar 18 13:17:32.896: INFO: Waiting up to 5m0s for pod "pod-configmaps-9102ae5b-7f40-4e8c-8a9d-fd3132856d97" in namespace "secrets-7511" to be "success or failure" Mar 18 13:17:32.899: INFO: Pod "pod-configmaps-9102ae5b-7f40-4e8c-8a9d-fd3132856d97": Phase="Pending", Reason="", readiness=false. Elapsed: 3.163927ms Mar 18 13:17:34.905: INFO: Pod "pod-configmaps-9102ae5b-7f40-4e8c-8a9d-fd3132856d97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008816089s Mar 18 13:17:36.908: INFO: Pod "pod-configmaps-9102ae5b-7f40-4e8c-8a9d-fd3132856d97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011822669s STEP: Saw pod success Mar 18 13:17:36.908: INFO: Pod "pod-configmaps-9102ae5b-7f40-4e8c-8a9d-fd3132856d97" satisfied condition "success or failure" Mar 18 13:17:36.910: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-9102ae5b-7f40-4e8c-8a9d-fd3132856d97 container env-test: STEP: delete the pod Mar 18 13:17:36.970: INFO: Waiting for pod pod-configmaps-9102ae5b-7f40-4e8c-8a9d-fd3132856d97 to disappear Mar 18 13:17:36.973: INFO: Pod pod-configmaps-9102ae5b-7f40-4e8c-8a9d-fd3132856d97 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:17:36.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7511" for this suite. 
Mar 18 13:17:42.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:17:43.070: INFO: namespace secrets-7511 deletion completed in 6.09238407s • [SLOW TEST:10.280 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:17:43.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 18 13:17:43.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9008' Mar 18 13:17:43.206: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 13:17:43.206: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Mar 18 13:17:43.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-9008' Mar 18 13:17:43.322: INFO: stderr: "" Mar 18 13:17:43.322: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:17:43.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9008" for this suite. 
Mar 18 13:17:49.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:17:49.415: INFO: namespace kubectl-9008 deletion completed in 6.090657678s • [SLOW TEST:6.345 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:17:49.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 13:17:49.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 18 13:17:49.610: INFO: stderr: "" Mar 18 13:17:49.610: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.10\", GitCommit:\"1bea6c00a7055edef03f1d4bb58b773fa8917f11\", GitTreeState:\"clean\", BuildDate:\"2020-03-09T11:07:06Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:17:49.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3135" for this suite. 
Mar 18 13:17:55.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:17:55.734: INFO: namespace kubectl-3135 deletion completed in 6.096980204s • [SLOW TEST:6.319 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:17:55.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 18 13:17:55.779: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:18:02.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9769" for this suite. 
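A RestartAlways pod runs its initContainers to completion, one at a time and in order, before any app container starts; that ordering is what this test verifies. A minimal sketch with assumed images and commands:

    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo                   # hypothetical name
    spec:
      restartPolicy: Always
      initContainers:                   # each must exit 0 before the next starts
      - name: init-1
        image: busybox
        command: ["/bin/true"]
      - name: init-2
        image: busybox
        command: ["/bin/true"]
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]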
Mar 18 13:18:24.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:18:24.621: INFO: namespace init-container-9769 deletion completed in 22.100922431s • [SLOW TEST:28.886 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:18:24.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 18 13:18:24.693: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad6faea2-c908-4188-a107-05b1dcedece6" in namespace "projected-4416" to be "success or failure" Mar 18 13:18:24.697: INFO: Pod "downwardapi-volume-ad6faea2-c908-4188-a107-05b1dcedece6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.666815ms Mar 18 13:18:26.700: INFO: Pod "downwardapi-volume-ad6faea2-c908-4188-a107-05b1dcedece6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007084177s Mar 18 13:18:28.705: INFO: Pod "downwardapi-volume-ad6faea2-c908-4188-a107-05b1dcedece6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011781556s STEP: Saw pod success Mar 18 13:18:28.705: INFO: Pod "downwardapi-volume-ad6faea2-c908-4188-a107-05b1dcedece6" satisfied condition "success or failure" Mar 18 13:18:28.709: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ad6faea2-c908-4188-a107-05b1dcedece6 container client-container: STEP: delete the pod Mar 18 13:18:28.729: INFO: Waiting for pod downwardapi-volume-ad6faea2-c908-4188-a107-05b1dcedece6 to disappear Mar 18 13:18:28.733: INFO: Pod downwardapi-volume-ad6faea2-c908-4188-a107-05b1dcedece6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:18:28.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4416" for this suite. 
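Same cpu-limit assertion as the earlier Downward API volume test, but served through a projected volume, which nests the downwardAPI source one level deeper; only the volume stanza differs. A sketch:

    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu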
Mar 18 13:18:34.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:18:34.851: INFO: namespace projected-4416 deletion completed in 6.114598151s • [SLOW TEST:10.231 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:18:34.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 18 13:18:34.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5537' Mar 18 13:18:37.251: INFO: stderr: "" Mar 18 13:18:37.251: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 18 13:18:37.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5537' Mar 18 13:18:37.411: INFO: stderr: "" Mar 18 13:18:37.411: INFO: stdout: "update-demo-nautilus-bv9r6 update-demo-nautilus-gczkn " Mar 18 13:18:37.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bv9r6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5537' Mar 18 13:18:37.499: INFO: stderr: "" Mar 18 13:18:37.499: INFO: stdout: "" Mar 18 13:18:37.499: INFO: update-demo-nautilus-bv9r6 is created but not running Mar 18 13:18:42.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5537' Mar 18 13:18:42.602: INFO: stderr: "" Mar 18 13:18:42.602: INFO: stdout: "update-demo-nautilus-bv9r6 update-demo-nautilus-gczkn " Mar 18 13:18:42.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bv9r6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5537' Mar 18 13:18:42.689: INFO: stderr: "" Mar 18 13:18:42.689: INFO: stdout: "true" Mar 18 13:18:42.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bv9r6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5537' Mar 18 13:18:42.786: INFO: stderr: "" Mar 18 13:18:42.786: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 13:18:42.786: INFO: validating pod update-demo-nautilus-bv9r6 Mar 18 13:18:42.790: INFO: got data: { "image": "nautilus.jpg" } Mar 18 13:18:42.790: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 13:18:42.790: INFO: update-demo-nautilus-bv9r6 is verified up and running Mar 18 13:18:42.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gczkn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5537' Mar 18 13:18:42.900: INFO: stderr: "" Mar 18 13:18:42.900: INFO: stdout: "true" Mar 18 13:18:42.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gczkn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5537' Mar 18 13:18:42.993: INFO: stderr: "" Mar 18 13:18:42.993: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 13:18:42.993: INFO: validating pod update-demo-nautilus-gczkn Mar 18 13:18:42.996: INFO: got data: { "image": "nautilus.jpg" } Mar 18 13:18:42.997: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 13:18:42.997: INFO: update-demo-nautilus-gczkn is verified up and running STEP: using delete to clean up resources Mar 18 13:18:42.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5537' Mar 18 13:18:43.095: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 18 13:18:43.095: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 18 13:18:43.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5537' Mar 18 13:18:43.185: INFO: stderr: "No resources found.\n" Mar 18 13:18:43.185: INFO: stdout: "" Mar 18 13:18:43.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5537 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 13:18:43.281: INFO: stderr: "" Mar 18 13:18:43.281: INFO: stdout: "update-demo-nautilus-bv9r6\nupdate-demo-nautilus-gczkn\n" Mar 18 13:18:43.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5537' Mar 18 13:18:43.962: INFO: stderr: "No resources found.\n" Mar 18 13:18:43.962: INFO: stdout: "" Mar 18 13:18:43.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5537 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 13:18:44.053: INFO: stderr: "" Mar 18 13:18:44.053: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:18:44.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5537" for this suite. Mar 18 13:19:06.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:19:06.156: INFO: namespace kubectl-5537 deletion completed in 22.099313359s • [SLOW TEST:31.303 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:19:06.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 18 13:19:06.239: INFO: PodSpec: initContainers in spec.initContainers 
[AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:19:11.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9902" for this suite. Mar 18 13:19:17.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:19:17.750: INFO: namespace init-container-9902 deletion completed in 6.098546879s • [SLOW TEST:11.593 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:19:17.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-687 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 18 13:19:17.777: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 18 13:19:37.856: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.15:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-687 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 13:19:37.856: INFO: >>> kubeConfig: /root/.kube/config I0318 13:19:37.894564 6 log.go:172] (0xc002a2b550) (0xc0012e6e60) Create stream I0318 13:19:37.894591 6 log.go:172] (0xc002a2b550) (0xc0012e6e60) Stream added, broadcasting: 1 I0318 13:19:37.898742 6 log.go:172] (0xc002a2b550) Reply frame received for 1 I0318 13:19:37.898784 6 log.go:172] (0xc002a2b550) (0xc0012e7040) Create stream I0318 13:19:37.898803 6 log.go:172] (0xc002a2b550) (0xc0012e7040) Stream added, broadcasting: 3 I0318 13:19:37.901441 6 log.go:172] (0xc002a2b550) Reply frame received for 3 I0318 13:19:37.901481 6 log.go:172] (0xc002a2b550) (0xc002784780) Create stream I0318 13:19:37.901496 6 log.go:172] (0xc002a2b550) (0xc002784780) Stream added, broadcasting: 5 I0318 13:19:37.902364 6 log.go:172] (0xc002a2b550) Reply frame received for 5 I0318 13:19:37.980251 6 log.go:172] (0xc002a2b550) Data frame received for 3 I0318 13:19:37.980282 6 log.go:172] (0xc0012e7040) (3) Data frame handling I0318 13:19:37.980290 6 log.go:172] (0xc0012e7040) (3) Data frame sent I0318 13:19:37.980306 6 log.go:172] (0xc002a2b550) Data frame received for 3 I0318 13:19:37.980310 6 log.go:172] (0xc0012e7040) (3) Data frame handling I0318 
13:19:37.980327 6 log.go:172] (0xc002a2b550) Data frame received for 5 I0318 13:19:37.980337 6 log.go:172] (0xc002784780) (5) Data frame handling I0318 13:19:37.981930 6 log.go:172] (0xc002a2b550) Data frame received for 1 I0318 13:19:37.981954 6 log.go:172] (0xc0012e6e60) (1) Data frame handling I0318 13:19:37.981964 6 log.go:172] (0xc0012e6e60) (1) Data frame sent I0318 13:19:37.981977 6 log.go:172] (0xc002a2b550) (0xc0012e6e60) Stream removed, broadcasting: 1 I0318 13:19:37.981990 6 log.go:172] (0xc002a2b550) Go away received I0318 13:19:37.982117 6 log.go:172] (0xc002a2b550) (0xc0012e6e60) Stream removed, broadcasting: 1 I0318 13:19:37.982135 6 log.go:172] (0xc002a2b550) (0xc0012e7040) Stream removed, broadcasting: 3 I0318 13:19:37.982144 6 log.go:172] (0xc002a2b550) (0xc002784780) Stream removed, broadcasting: 5 Mar 18 13:19:37.982: INFO: Found all expected endpoints: [netserver-0] Mar 18 13:19:37.984: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.8:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-687 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 13:19:37.984: INFO: >>> kubeConfig: /root/.kube/config I0318 13:19:38.012629 6 log.go:172] (0xc002a496b0) (0xc002319360) Create stream I0318 13:19:38.012662 6 log.go:172] (0xc002a496b0) (0xc002319360) Stream added, broadcasting: 1 I0318 13:19:38.015600 6 log.go:172] (0xc002a496b0) Reply frame received for 1 I0318 13:19:38.015639 6 log.go:172] (0xc002a496b0) (0xc002a9f040) Create stream I0318 13:19:38.015653 6 log.go:172] (0xc002a496b0) (0xc002a9f040) Stream added, broadcasting: 3 I0318 13:19:38.016462 6 log.go:172] (0xc002a496b0) Reply frame received for 3 I0318 13:19:38.016503 6 log.go:172] (0xc002a496b0) (0xc002a9f0e0) Create stream I0318 13:19:38.016517 6 log.go:172] (0xc002a496b0) (0xc002a9f0e0) Stream added, broadcasting: 5 I0318 13:19:38.017530 6 log.go:172] (0xc002a496b0) Reply frame received for 5 I0318 13:19:38.070721 6 log.go:172] (0xc002a496b0) Data frame received for 3 I0318 13:19:38.070764 6 log.go:172] (0xc002a9f040) (3) Data frame handling I0318 13:19:38.070809 6 log.go:172] (0xc002a9f040) (3) Data frame sent I0318 13:19:38.070829 6 log.go:172] (0xc002a496b0) Data frame received for 3 I0318 13:19:38.070868 6 log.go:172] (0xc002a9f040) (3) Data frame handling I0318 13:19:38.070897 6 log.go:172] (0xc002a496b0) Data frame received for 5 I0318 13:19:38.070921 6 log.go:172] (0xc002a9f0e0) (5) Data frame handling I0318 13:19:38.073001 6 log.go:172] (0xc002a496b0) Data frame received for 1 I0318 13:19:38.073025 6 log.go:172] (0xc002319360) (1) Data frame handling I0318 13:19:38.073038 6 log.go:172] (0xc002319360) (1) Data frame sent I0318 13:19:38.073059 6 log.go:172] (0xc002a496b0) (0xc002319360) Stream removed, broadcasting: 1 I0318 13:19:38.073090 6 log.go:172] (0xc002a496b0) Go away received I0318 13:19:38.073335 6 log.go:172] (0xc002a496b0) (0xc002319360) Stream removed, broadcasting: 1 I0318 13:19:38.073366 6 log.go:172] (0xc002a496b0) (0xc002a9f040) Stream removed, broadcasting: 3 I0318 13:19:38.073378 6 log.go:172] (0xc002a496b0) (0xc002a9f0e0) Stream removed, broadcasting: 5 Mar 18 13:19:38.073: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:19:38.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "pod-network-test-687" for this suite. Mar 18 13:20:00.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:20:00.179: INFO: namespace pod-network-test-687 deletion completed in 22.101463706s • [SLOW TEST:42.429 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:20:00.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 18 13:20:04.781: INFO: Successfully updated pod "pod-update-667e8766-0d10-4b37-a15e-4a04bcccee1d" STEP: verifying the updated pod is in kubernetes Mar 18 13:20:04.790: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:20:04.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9206" for this suite. 
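"Updated" here means mutating a live pod in place; only a few pod fields (notably metadata.labels and annotations) allow this. Outside the test framework the same change could be made with something like kubectl patch pod <name> -p <patch>, where the strategic-merge patch body (label key and value hypothetical) would be:

    metadata:
      labels:
        time: updated        # hypothetical label; pod labels are mutable in place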
Mar 18 13:20:26.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 13:20:26.912: INFO: namespace pods-9206 deletion completed in 22.118581418s
• [SLOW TEST:26.731 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 13:20:26.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Mar 18 13:20:27.007: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9457,SelfLink:/api/v1/namespaces/watch-9457/configmaps/e2e-watch-test-label-changed,UID:edc6f98e-1301-4678-80af-a2fb9e1360d0,ResourceVersion:517582,Generation:0,CreationTimestamp:2020-03-18 13:20:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 18 13:20:27.007: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9457,SelfLink:/api/v1/namespaces/watch-9457/configmaps/e2e-watch-test-label-changed,UID:edc6f98e-1301-4678-80af-a2fb9e1360d0,ResourceVersion:517583,Generation:0,CreationTimestamp:2020-03-18 13:20:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar 18 13:20:27.007: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9457,SelfLink:/api/v1/namespaces/watch-9457/configmaps/e2e-watch-test-label-changed,UID:edc6f98e-1301-4678-80af-a2fb9e1360d0,ResourceVersion:517584,Generation:0,CreationTimestamp:2020-03-18 13:20:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Mar 18 13:20:37.043: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9457,SelfLink:/api/v1/namespaces/watch-9457/configmaps/e2e-watch-test-label-changed,UID:edc6f98e-1301-4678-80af-a2fb9e1360d0,ResourceVersion:517606,Generation:0,CreationTimestamp:2020-03-18 13:20:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 18 13:20:37.043: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9457,SelfLink:/api/v1/namespaces/watch-9457/configmaps/e2e-watch-test-label-changed,UID:edc6f98e-1301-4678-80af-a2fb9e1360d0,ResourceVersion:517607,Generation:0,CreationTimestamp:2020-03-18 13:20:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Mar 18 13:20:37.043: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9457,SelfLink:/api/v1/namespaces/watch-9457/configmaps/e2e-watch-test-label-changed,UID:edc6f98e-1301-4678-80af-a2fb9e1360d0,ResourceVersion:517608,Generation:0,CreationTimestamp:2020-03-18 13:20:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 13:20:37.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9457" for this suite.
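
All six "Got :" events above come from a single label-selected watch: changing the label away from the selector is what surfaces to the watcher as DELETED, and restoring it produces the later ADDED. A sketch of the watch side, again with 1.15-era (context-free) signatures and the selector from the log:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Only ConfigMaps matching the selector are visible to this watch;
        // an object that stops matching is reported as DELETED even though
        // it still exists in the cluster.
        w, err := client.CoreV1().ConfigMaps("watch-9457").Watch(metav1.ListOptions{
            LabelSelector: "watch-this-configmap=label-changed-and-restored",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
        }
    }
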
Mar 18 13:20:43.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:20:43.142: INFO: namespace watch-9457 deletion completed in 6.094559082s • [SLOW TEST:16.229 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:20:43.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-h97hm in namespace proxy-7292 I0318 13:20:43.257770 6 runners.go:180] Created replication controller with name: proxy-service-h97hm, namespace: proxy-7292, replica count: 1 I0318 13:20:44.308304 6 runners.go:180] proxy-service-h97hm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 13:20:45.308511 6 runners.go:180] proxy-service-h97hm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 13:20:46.308752 6 runners.go:180] proxy-service-h97hm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0318 13:20:47.308946 6 runners.go:180] proxy-service-h97hm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0318 13:20:48.309261 6 runners.go:180] proxy-service-h97hm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0318 13:20:49.309550 6 runners.go:180] proxy-service-h97hm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0318 13:20:50.309777 6 runners.go:180] proxy-service-h97hm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0318 13:20:51.309988 6 runners.go:180] proxy-service-h97hm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0318 13:20:52.310208 6 runners.go:180] proxy-service-h97hm Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 18 13:20:52.313: INFO: setup took 9.09091377s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 18 13:20:52.320: INFO: (0) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 6.206475ms) Mar 18 
13:20:52.320: INFO: (0) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 6.223706ms) Mar 18 13:20:52.320: INFO: (0) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 6.30572ms) Mar 18 13:20:52.320: INFO: (0) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7/proxy/: test (200; 6.454915ms) Mar 18 13:20:52.321: INFO: (0) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 7.4402ms) Mar 18 13:20:52.322: INFO: (0) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:1080/proxy/: test<... (200; 8.148885ms) Mar 18 13:20:52.322: INFO: (0) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... (200; 8.630278ms) Mar 18 13:20:52.323: INFO: (0) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname2/proxy/: bar (200; 9.595696ms) Mar 18 13:20:52.323: INFO: (0) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 9.589594ms) Mar 18 13:20:52.324: INFO: (0) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname2/proxy/: bar (200; 10.119856ms) Mar 18 13:20:52.324: INFO: (0) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname1/proxy/: foo (200; 9.974216ms) Mar 18 13:20:52.327: INFO: (0) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:462/proxy/: tls qux (200; 13.835014ms) Mar 18 13:20:52.327: INFO: (0) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: test (200; 3.941157ms) Mar 18 13:20:52.334: INFO: (1) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... (200; 4.984292ms) Mar 18 13:20:52.335: INFO: (1) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:1080/proxy/: test<... (200; 5.063479ms) Mar 18 13:20:52.335: INFO: (1) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 4.936077ms) Mar 18 13:20:52.335: INFO: (1) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: test (200; 4.12427ms) Mar 18 13:20:52.342: INFO: (2) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:462/proxy/: tls qux (200; 4.193206ms) Mar 18 13:20:52.342: INFO: (2) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname2/proxy/: tls qux (200; 4.392211ms) Mar 18 13:20:52.342: INFO: (2) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:1080/proxy/: test<... (200; 4.529267ms) Mar 18 13:20:52.342: INFO: (2) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:460/proxy/: tls baz (200; 4.507397ms) Mar 18 13:20:52.342: INFO: (2) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... 
(200; 4.692621ms) Mar 18 13:20:52.342: INFO: (2) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 4.682749ms) Mar 18 13:20:52.342: INFO: (2) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname2/proxy/: bar (200; 4.766327ms) Mar 18 13:20:52.342: INFO: (2) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname1/proxy/: foo (200; 4.913926ms) Mar 18 13:20:52.342: INFO: (2) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname1/proxy/: tls baz (200; 4.82815ms) Mar 18 13:20:52.342: INFO: (2) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 5.072429ms) Mar 18 13:20:52.343: INFO: (2) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 5.203127ms) Mar 18 13:20:52.343: INFO: (2) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 5.178114ms) Mar 18 13:20:52.343: INFO: (2) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname2/proxy/: bar (200; 5.312607ms) Mar 18 13:20:52.349: INFO: (3) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: test (200; 6.701309ms) Mar 18 13:20:52.350: INFO: (3) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 6.676569ms) Mar 18 13:20:52.350: INFO: (3) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 6.785586ms) Mar 18 13:20:52.350: INFO: (3) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 6.940965ms) Mar 18 13:20:52.350: INFO: (3) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... (200; 7.498098ms) Mar 18 13:20:52.350: INFO: (3) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 7.53053ms) Mar 18 13:20:52.350: INFO: (3) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:1080/proxy/: test<... 
(200; 7.600976ms) Mar 18 13:20:52.351: INFO: (3) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname2/proxy/: bar (200; 8.462364ms) Mar 18 13:20:52.351: INFO: (3) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname1/proxy/: foo (200; 8.586734ms) Mar 18 13:20:52.352: INFO: (3) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname2/proxy/: tls qux (200; 8.69501ms) Mar 18 13:20:52.352: INFO: (3) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname1/proxy/: tls baz (200; 8.73659ms) Mar 18 13:20:52.352: INFO: (3) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname2/proxy/: bar (200; 8.796795ms) Mar 18 13:20:52.352: INFO: (3) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 8.806429ms) Mar 18 13:20:52.352: INFO: (3) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:462/proxy/: tls qux (200; 8.778701ms) Mar 18 13:20:52.355: INFO: (4) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 3.464548ms) Mar 18 13:20:52.355: INFO: (4) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7/proxy/: test (200; 3.596989ms) Mar 18 13:20:52.355: INFO: (4) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:460/proxy/: tls baz (200; 3.627854ms) Mar 18 13:20:52.356: INFO: (4) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 4.27959ms) Mar 18 13:20:52.356: INFO: (4) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 4.595291ms) Mar 18 13:20:52.356: INFO: (4) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 4.651512ms) Mar 18 13:20:52.357: INFO: (4) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname1/proxy/: tls baz (200; 4.870148ms) Mar 18 13:20:52.357: INFO: (4) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 5.333818ms) Mar 18 13:20:52.357: INFO: (4) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname2/proxy/: bar (200; 5.212401ms) Mar 18 13:20:52.357: INFO: (4) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... (200; 5.263846ms) Mar 18 13:20:52.357: INFO: (4) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname2/proxy/: tls qux (200; 5.328235ms) Mar 18 13:20:52.357: INFO: (4) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: test<... (200; 5.354157ms) Mar 18 13:20:52.357: INFO: (4) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname2/proxy/: bar (200; 5.407661ms) Mar 18 13:20:52.357: INFO: (4) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname1/proxy/: foo (200; 5.425155ms) Mar 18 13:20:52.357: INFO: (4) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:462/proxy/: tls qux (200; 5.44877ms) Mar 18 13:20:52.361: INFO: (5) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:460/proxy/: tls baz (200; 3.831447ms) Mar 18 13:20:52.362: INFO: (5) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 4.581117ms) Mar 18 13:20:52.362: INFO: (5) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:1080/proxy/: test<... 
(200; 4.512591ms) Mar 18 13:20:52.362: INFO: (5) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname2/proxy/: bar (200; 4.760028ms) Mar 18 13:20:52.362: INFO: (5) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname2/proxy/: bar (200; 4.768366ms) Mar 18 13:20:52.362: INFO: (5) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 4.819533ms) Mar 18 13:20:52.362: INFO: (5) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... (200; 4.847936ms) Mar 18 13:20:52.362: INFO: (5) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 5.031219ms) Mar 18 13:20:52.362: INFO: (5) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: test (200; 5.137075ms) Mar 18 13:20:52.363: INFO: (5) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 5.19771ms) Mar 18 13:20:52.363: INFO: (5) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 5.385008ms) Mar 18 13:20:52.363: INFO: (5) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname1/proxy/: foo (200; 5.515732ms) Mar 18 13:20:52.363: INFO: (5) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:462/proxy/: tls qux (200; 5.792727ms) Mar 18 13:20:52.363: INFO: (5) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname2/proxy/: tls qux (200; 5.702156ms) Mar 18 13:20:52.364: INFO: (5) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname1/proxy/: tls baz (200; 6.74613ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 6.452902ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7/proxy/: test (200; 6.525457ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname2/proxy/: tls qux (200; 6.528516ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname2/proxy/: bar (200; 6.599835ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname1/proxy/: foo (200; 6.506503ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 6.596706ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:462/proxy/: tls qux (200; 6.674505ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: ... (200; 6.621564ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname2/proxy/: bar (200; 6.594511ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 6.721992ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname1/proxy/: tls baz (200; 6.697397ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:1080/proxy/: test<... 
(200; 6.582504ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 6.726733ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 6.755236ms) Mar 18 13:20:52.371: INFO: (6) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:460/proxy/: tls baz (200; 6.771636ms) Mar 18 13:20:52.375: INFO: (7) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: test (200; 3.945725ms) Mar 18 13:20:52.375: INFO: (7) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 4.226341ms) Mar 18 13:20:52.375: INFO: (7) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname1/proxy/: tls baz (200; 4.243443ms) Mar 18 13:20:52.375: INFO: (7) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname2/proxy/: bar (200; 4.271869ms) Mar 18 13:20:52.375: INFO: (7) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 4.228565ms) Mar 18 13:20:52.375: INFO: (7) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:460/proxy/: tls baz (200; 4.296559ms) Mar 18 13:20:52.375: INFO: (7) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:1080/proxy/: test<... (200; 4.287129ms) Mar 18 13:20:52.376: INFO: (7) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:462/proxy/: tls qux (200; 4.449873ms) Mar 18 13:20:52.376: INFO: (7) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... (200; 4.429078ms) Mar 18 13:20:52.377: INFO: (7) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 5.926139ms) Mar 18 13:20:52.377: INFO: (7) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname1/proxy/: foo (200; 6.014264ms) Mar 18 13:20:52.377: INFO: (7) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname2/proxy/: tls qux (200; 5.95229ms) Mar 18 13:20:52.377: INFO: (7) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname2/proxy/: bar (200; 5.993021ms) Mar 18 13:20:52.380: INFO: (8) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:462/proxy/: tls qux (200; 3.032146ms) Mar 18 13:20:52.381: INFO: (8) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:1080/proxy/: test<... 
(200; 3.545375ms) Mar 18 13:20:52.381: INFO: (8) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 3.780575ms) Mar 18 13:20:52.381: INFO: (8) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7/proxy/: test (200; 3.813629ms) Mar 18 13:20:52.381: INFO: (8) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 3.8377ms) Mar 18 13:20:52.382: INFO: (8) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 4.141053ms) Mar 18 13:20:52.382: INFO: (8) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:460/proxy/: tls baz (200; 4.382932ms) Mar 18 13:20:52.382: INFO: (8) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 4.382498ms) Mar 18 13:20:52.382: INFO: (8) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname2/proxy/: bar (200; 4.532404ms) Mar 18 13:20:52.382: INFO: (8) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname1/proxy/: tls baz (200; 4.502808ms) Mar 18 13:20:52.382: INFO: (8) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname2/proxy/: tls qux (200; 4.537096ms) Mar 18 13:20:52.382: INFO: (8) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... (200; 4.414783ms) Mar 18 13:20:52.382: INFO: (8) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname1/proxy/: foo (200; 4.697189ms) Mar 18 13:20:52.382: INFO: (8) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 4.706587ms) Mar 18 13:20:52.382: INFO: (8) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: test (200; 3.746846ms) Mar 18 13:20:52.388: INFO: (9) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 5.578092ms) Mar 18 13:20:52.388: INFO: (9) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname2/proxy/: bar (200; 5.503856ms) Mar 18 13:20:52.388: INFO: (9) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 5.759762ms) Mar 18 13:20:52.388: INFO: (9) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname1/proxy/: tls baz (200; 5.616812ms) Mar 18 13:20:52.388: INFO: (9) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 5.666225ms) Mar 18 13:20:52.388: INFO: (9) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... (200; 5.729529ms) Mar 18 13:20:52.388: INFO: (9) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: test<... (200; 5.726983ms) Mar 18 13:20:52.388: INFO: (9) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:462/proxy/: tls qux (200; 5.778439ms) Mar 18 13:20:52.388: INFO: (9) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname2/proxy/: bar (200; 5.881993ms) Mar 18 13:20:52.388: INFO: (9) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 5.980816ms) Mar 18 13:20:52.392: INFO: (10) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 3.179207ms) Mar 18 13:20:52.392: INFO: (10) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:1080/proxy/: test<... 
(200; 3.436094ms) Mar 18 13:20:52.392: INFO: (10) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:460/proxy/: tls baz (200; 3.500972ms) Mar 18 13:20:52.392: INFO: (10) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... (200; 3.491166ms) Mar 18 13:20:52.392: INFO: (10) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:462/proxy/: tls qux (200; 3.570815ms) Mar 18 13:20:52.393: INFO: (10) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 4.424658ms) Mar 18 13:20:52.393: INFO: (10) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 4.387997ms) Mar 18 13:20:52.393: INFO: (10) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7/proxy/: test (200; 4.431211ms) Mar 18 13:20:52.393: INFO: (10) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 4.422158ms) Mar 18 13:20:52.393: INFO: (10) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname1/proxy/: foo (200; 4.555698ms) Mar 18 13:20:52.393: INFO: (10) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname1/proxy/: tls baz (200; 4.761038ms) Mar 18 13:20:52.393: INFO: (10) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname2/proxy/: bar (200; 4.88453ms) Mar 18 13:20:52.393: INFO: (10) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname2/proxy/: bar (200; 4.896274ms) Mar 18 13:20:52.393: INFO: (10) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname2/proxy/: tls qux (200; 5.025711ms) Mar 18 13:20:52.394: INFO: (10) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 5.179405ms) Mar 18 13:20:52.394: INFO: (10) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: test<... (200; 3.612432ms) Mar 18 13:20:52.397: INFO: (11) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 3.622674ms) Mar 18 13:20:52.397: INFO: (11) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 3.702572ms) Mar 18 13:20:52.397: INFO: (11) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7/proxy/: test (200; 3.716504ms) Mar 18 13:20:52.398: INFO: (11) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname2/proxy/: tls qux (200; 3.748195ms) Mar 18 13:20:52.398: INFO: (11) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:460/proxy/: tls baz (200; 4.696577ms) Mar 18 13:20:52.398: INFO: (11) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... (200; 4.704822ms) Mar 18 13:20:52.399: INFO: (11) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname1/proxy/: tls baz (200; 4.767606ms) Mar 18 13:20:52.399: INFO: (11) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: ... (200; 2.68325ms) Mar 18 13:20:52.402: INFO: (12) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 2.768131ms) Mar 18 13:20:52.403: INFO: (12) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 3.777382ms) Mar 18 13:20:52.403: INFO: (12) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: test<... 
(200; 4.51465ms) Mar 18 13:20:52.404: INFO: (12) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 4.683363ms) Mar 18 13:20:52.404: INFO: (12) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 4.734274ms) Mar 18 13:20:52.404: INFO: (12) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname2/proxy/: bar (200; 4.77168ms) Mar 18 13:20:52.404: INFO: (12) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname2/proxy/: bar (200; 4.742646ms) Mar 18 13:20:52.404: INFO: (12) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname2/proxy/: tls qux (200; 4.749758ms) Mar 18 13:20:52.404: INFO: (12) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7/proxy/: test (200; 4.811821ms) Mar 18 13:20:52.404: INFO: (12) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname1/proxy/: tls baz (200; 4.805512ms) Mar 18 13:20:52.404: INFO: (12) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:460/proxy/: tls baz (200; 4.803243ms) Mar 18 13:20:52.404: INFO: (12) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname1/proxy/: foo (200; 4.884433ms) Mar 18 13:20:52.407: INFO: (13) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 2.968221ms) Mar 18 13:20:52.407: INFO: (13) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7/proxy/: test (200; 3.239252ms) Mar 18 13:20:52.407: INFO: (13) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:1080/proxy/: test<... (200; 3.291024ms) Mar 18 13:20:52.407: INFO: (13) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 3.370881ms) Mar 18 13:20:52.407: INFO: (13) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... (200; 3.288356ms) Mar 18 13:20:52.407: INFO: (13) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:460/proxy/: tls baz (200; 3.389988ms) Mar 18 13:20:52.407: INFO: (13) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: ... (200; 4.523686ms) Mar 18 13:20:52.414: INFO: (14) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 4.518543ms) Mar 18 13:20:52.414: INFO: (14) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:1080/proxy/: test<... (200; 4.5424ms) Mar 18 13:20:52.414: INFO: (14) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 4.565927ms) Mar 18 13:20:52.414: INFO: (14) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7/proxy/: test (200; 4.612308ms) Mar 18 13:20:52.414: INFO: (14) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 4.66933ms) Mar 18 13:20:52.414: INFO: (14) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:462/proxy/: tls qux (200; 4.632149ms) Mar 18 13:20:52.414: INFO: (14) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:460/proxy/: tls baz (200; 4.678006ms) Mar 18 13:20:52.414: INFO: (14) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: ... 
(200; 3.578402ms) Mar 18 13:20:52.419: INFO: (15) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname2/proxy/: bar (200; 3.673944ms) Mar 18 13:20:52.419: INFO: (15) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 3.68399ms) Mar 18 13:20:52.420: INFO: (15) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname1/proxy/: foo (200; 4.125159ms) Mar 18 13:20:52.420: INFO: (15) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: test<... (200; 4.801971ms) Mar 18 13:20:52.420: INFO: (15) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname2/proxy/: bar (200; 4.779568ms) Mar 18 13:20:52.420: INFO: (15) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname1/proxy/: tls baz (200; 4.789205ms) Mar 18 13:20:52.420: INFO: (15) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7/proxy/: test (200; 4.712994ms) Mar 18 13:20:52.420: INFO: (15) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 4.779378ms) Mar 18 13:20:52.420: INFO: (15) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 4.865328ms) Mar 18 13:20:52.423: INFO: (16) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:462/proxy/: tls qux (200; 2.633691ms) Mar 18 13:20:52.424: INFO: (16) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 3.171895ms) Mar 18 13:20:52.424: INFO: (16) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 3.26549ms) Mar 18 13:20:52.424: INFO: (16) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 3.234994ms) Mar 18 13:20:52.424: INFO: (16) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7/proxy/: test (200; 3.318707ms) Mar 18 13:20:52.424: INFO: (16) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... (200; 3.309487ms) Mar 18 13:20:52.424: INFO: (16) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:1080/proxy/: test<... (200; 3.38539ms) Mar 18 13:20:52.424: INFO: (16) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 3.309536ms) Mar 18 13:20:52.424: INFO: (16) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:460/proxy/: tls baz (200; 3.324067ms) Mar 18 13:20:52.424: INFO: (16) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: test<... (200; 3.120617ms) Mar 18 13:20:52.429: INFO: (17) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 3.104358ms) Mar 18 13:20:52.429: INFO: (17) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7/proxy/: test (200; 3.210219ms) Mar 18 13:20:52.429: INFO: (17) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:462/proxy/: tls qux (200; 3.442866ms) Mar 18 13:20:52.429: INFO: (17) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... (200; 3.58846ms) Mar 18 13:20:52.429: INFO: (17) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: ... 
(200; 4.471627ms) Mar 18 13:20:52.435: INFO: (18) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 4.533098ms) Mar 18 13:20:52.435: INFO: (18) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: test (200; 4.639487ms) Mar 18 13:20:52.435: INFO: (18) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:1080/proxy/: test<... (200; 4.655436ms) Mar 18 13:20:52.435: INFO: (18) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 4.708507ms) Mar 18 13:20:52.435: INFO: (18) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname1/proxy/: foo (200; 4.672045ms) Mar 18 13:20:52.436: INFO: (18) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname1/proxy/: tls baz (200; 4.885258ms) Mar 18 13:20:52.436: INFO: (18) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname2/proxy/: bar (200; 4.899117ms) Mar 18 13:20:52.436: INFO: (18) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname2/proxy/: tls qux (200; 4.912031ms) Mar 18 13:20:52.436: INFO: (18) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname2/proxy/: bar (200; 5.010711ms) Mar 18 13:20:52.439: INFO: (19) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 3.73072ms) Mar 18 13:20:52.440: INFO: (19) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:1080/proxy/: test<... (200; 3.952892ms) Mar 18 13:20:52.440: INFO: (19) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 3.947485ms) Mar 18 13:20:52.440: INFO: (19) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:443/proxy/: test (200; 4.082376ms) Mar 18 13:20:52.440: INFO: (19) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:160/proxy/: foo (200; 4.055217ms) Mar 18 13:20:52.440: INFO: (19) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:460/proxy/: tls baz (200; 4.035347ms) Mar 18 13:20:52.440: INFO: (19) /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:162/proxy/: bar (200; 4.061304ms) Mar 18 13:20:52.440: INFO: (19) /api/v1/namespaces/proxy-7292/pods/https:proxy-service-h97hm-g4jp7:462/proxy/: tls qux (200; 4.052088ms) Mar 18 13:20:52.440: INFO: (19) /api/v1/namespaces/proxy-7292/pods/http:proxy-service-h97hm-g4jp7:1080/proxy/: ... 
(200; 4.091887ms) Mar 18 13:20:52.440: INFO: (19) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname2/proxy/: bar (200; 4.570651ms) Mar 18 13:20:52.440: INFO: (19) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname2/proxy/: bar (200; 4.706928ms) Mar 18 13:20:52.440: INFO: (19) /api/v1/namespaces/proxy-7292/services/http:proxy-service-h97hm:portname1/proxy/: foo (200; 4.762338ms) Mar 18 13:20:52.441: INFO: (19) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname2/proxy/: tls qux (200; 4.745526ms) Mar 18 13:20:52.441: INFO: (19) /api/v1/namespaces/proxy-7292/services/proxy-service-h97hm:portname1/proxy/: foo (200; 4.763435ms) Mar 18 13:20:52.441: INFO: (19) /api/v1/namespaces/proxy-7292/services/https:proxy-service-h97hm:tlsportname1/proxy/: tls baz (200; 4.929627ms) STEP: deleting ReplicationController proxy-service-h97hm in namespace proxy-7292, will wait for the garbage collector to delete the pods Mar 18 13:20:52.498: INFO: Deleting ReplicationController proxy-service-h97hm took: 5.290879ms Mar 18 13:20:52.798: INFO: Terminating ReplicationController proxy-service-h97hm pods took: 300.250683ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:21:01.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7292" for this suite. Mar 18 13:21:07.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:21:08.023: INFO: namespace proxy-7292 deletion completed in 6.119468768s • [SLOW TEST:24.878 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:21:08.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:21:13.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1307" for this suite. 
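
Each of the 320 proxy attempts logged in the test above is a GET through the apiserver's proxy subresource, where a name of the form "pod:port" or "service:portname" (optionally prefixed with http: or https: to pick the scheme) selects the backend. A sketch of one such request through the typed client's RESTClient, 1.15 signatures, using a pod name and port taken from the log:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Builds GET /api/v1/namespaces/proxy-7292/pods/proxy-service-h97hm-g4jp7:160/proxy/
        body, err := client.CoreV1().RESTClient().Get().
            Namespace("proxy-7292").
            Resource("pods").
            Name("proxy-service-h97hm-g4jp7:160").
            SubResource("proxy").
            DoRaw()
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s\n", body) // per the log, port 160 answers "foo"
    }
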
Mar 18 13:21:35.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:21:35.231: INFO: namespace replication-controller-1307 deletion completed in 22.091092063s • [SLOW TEST:27.207 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:21:35.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-907c4f3a-b798-48df-a9d5-f97dea976d7f in namespace container-probe-5764 Mar 18 13:21:39.324: INFO: Started pod liveness-907c4f3a-b798-48df-a9d5-f97dea976d7f in namespace container-probe-5764 STEP: checking the pod's current state and verifying that restartCount is present Mar 18 13:21:39.327: INFO: Initial restart count of pod liveness-907c4f3a-b798-48df-a9d5-f97dea976d7f is 0 Mar 18 13:21:57.373: INFO: Restart count of pod container-probe-5764/liveness-907c4f3a-b798-48df-a9d5-f97dea976d7f is now 1 (18.04619947s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:21:57.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5764" for this suite. 
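
The restart observed above ("Restart count ... is now 1 (18.04619947s elapsed)") is the kubelet acting on an HTTP liveness probe against /healthz. A sketch of the relevant pod spec in Go, assuming the 1.15 API (the embedded probe field is named Handler here; much newer releases call it ProbeHandler) and an illustrative image whose /healthz starts failing shortly after startup:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // livenessPod is restarted by the kubelet once GET /healthz on
    // port 8080 starts returning a non-2xx status.
    func livenessPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "liveness",
                    Image: "gcr.io/kubernetes-e2e-test-images/liveness:1.1", // illustrative: a purpose-built /healthz server
                    Args:  []string{"/server"},
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
                        },
                        InitialDelaySeconds: 15,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
    }
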
Mar 18 13:22:03.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:22:03.510: INFO: namespace container-probe-5764 deletion completed in 6.096414838s • [SLOW TEST:28.278 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:22:03.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-7944/configmap-test-98a7bd64-51f8-476e-bf7c-418fa216016e STEP: Creating a pod to test consume configMaps Mar 18 13:22:03.584: INFO: Waiting up to 5m0s for pod "pod-configmaps-85d01109-eab2-45a3-8d59-8732c2a0e31e" in namespace "configmap-7944" to be "success or failure" Mar 18 13:22:03.588: INFO: Pod "pod-configmaps-85d01109-eab2-45a3-8d59-8732c2a0e31e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.026654ms Mar 18 13:22:05.609: INFO: Pod "pod-configmaps-85d01109-eab2-45a3-8d59-8732c2a0e31e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024981191s Mar 18 13:22:07.614: INFO: Pod "pod-configmaps-85d01109-eab2-45a3-8d59-8732c2a0e31e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029284245s STEP: Saw pod success Mar 18 13:22:07.614: INFO: Pod "pod-configmaps-85d01109-eab2-45a3-8d59-8732c2a0e31e" satisfied condition "success or failure" Mar 18 13:22:07.617: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-85d01109-eab2-45a3-8d59-8732c2a0e31e container env-test: STEP: delete the pod Mar 18 13:22:07.638: INFO: Waiting for pod pod-configmaps-85d01109-eab2-45a3-8d59-8732c2a0e31e to disappear Mar 18 13:22:07.642: INFO: Pod pod-configmaps-85d01109-eab2-45a3-8d59-8732c2a0e31e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:22:07.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7944" for this suite. 
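
In the ConfigMap test just above, "consumable via the environment" refers to envFrom, which imports every key of the ConfigMap as an environment variable in one stanza. A sketch of the container side, using the ConfigMap name from the log; the prefix is a hypothetical extra:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // envTestContainer receives one environment variable per ConfigMap key,
    // rather than naming keys individually.
    func envTestContainer() corev1.Container {
        return corev1.Container{
            Name:    "env-test",
            Image:   "busybox",
            Command: []string{"sh", "-c", "env"},
            EnvFrom: []corev1.EnvFromSource{{
                Prefix: "p_", // hypothetical; empty means keys are used verbatim
                ConfigMapRef: &corev1.ConfigMapEnvSource{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "configmap-test-98a7bd64-51f8-476e-bf7c-418fa216016e",
                    },
                },
            }},
        }
    }
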
Mar 18 13:22:13.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:22:13.742: INFO: namespace configmap-7944 deletion completed in 6.096391297s • [SLOW TEST:10.232 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:22:13.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:22:17.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6042" for this suite. Mar 18 13:23:03.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:23:03.979: INFO: namespace kubelet-test-6042 deletion completed in 46.094042795s • [SLOW TEST:50.237 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:23:03.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-wc9q STEP: Creating a pod to test atomic-volume-subpath Mar 18 13:23:04.050: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wc9q" in 
namespace "subpath-2647" to be "success or failure" Mar 18 13:23:04.054: INFO: Pod "pod-subpath-test-configmap-wc9q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.221056ms Mar 18 13:23:06.077: INFO: Pod "pod-subpath-test-configmap-wc9q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027056348s Mar 18 13:23:08.082: INFO: Pod "pod-subpath-test-configmap-wc9q": Phase="Running", Reason="", readiness=true. Elapsed: 4.031375562s Mar 18 13:23:10.095: INFO: Pod "pod-subpath-test-configmap-wc9q": Phase="Running", Reason="", readiness=true. Elapsed: 6.044930124s Mar 18 13:23:12.098: INFO: Pod "pod-subpath-test-configmap-wc9q": Phase="Running", Reason="", readiness=true. Elapsed: 8.048361488s Mar 18 13:23:14.107: INFO: Pod "pod-subpath-test-configmap-wc9q": Phase="Running", Reason="", readiness=true. Elapsed: 10.056724991s Mar 18 13:23:16.113: INFO: Pod "pod-subpath-test-configmap-wc9q": Phase="Running", Reason="", readiness=true. Elapsed: 12.062840274s Mar 18 13:23:18.117: INFO: Pod "pod-subpath-test-configmap-wc9q": Phase="Running", Reason="", readiness=true. Elapsed: 14.067117778s Mar 18 13:23:20.121: INFO: Pod "pod-subpath-test-configmap-wc9q": Phase="Running", Reason="", readiness=true. Elapsed: 16.071156353s Mar 18 13:23:22.124: INFO: Pod "pod-subpath-test-configmap-wc9q": Phase="Running", Reason="", readiness=true. Elapsed: 18.074090375s Mar 18 13:23:24.128: INFO: Pod "pod-subpath-test-configmap-wc9q": Phase="Running", Reason="", readiness=true. Elapsed: 20.077967397s Mar 18 13:23:26.132: INFO: Pod "pod-subpath-test-configmap-wc9q": Phase="Running", Reason="", readiness=true. Elapsed: 22.081665568s Mar 18 13:23:28.135: INFO: Pod "pod-subpath-test-configmap-wc9q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.085227655s STEP: Saw pod success Mar 18 13:23:28.135: INFO: Pod "pod-subpath-test-configmap-wc9q" satisfied condition "success or failure" Mar 18 13:23:28.138: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-wc9q container test-container-subpath-configmap-wc9q: STEP: delete the pod Mar 18 13:23:28.155: INFO: Waiting for pod pod-subpath-test-configmap-wc9q to disappear Mar 18 13:23:28.158: INFO: Pod pod-subpath-test-configmap-wc9q no longer exists STEP: Deleting pod pod-subpath-test-configmap-wc9q Mar 18 13:23:28.158: INFO: Deleting pod "pod-subpath-test-configmap-wc9q" in namespace "subpath-2647" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:23:28.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2647" for this suite. 
Mar 18 13:23:34.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:23:34.266: INFO: namespace subpath-2647 deletion completed in 6.103561296s • [SLOW TEST:30.287 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:23:34.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 18 13:23:34.345: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:23:51.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1302" for this suite. 
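
"Deleting the pod gracefully" above means a delete with a grace period: the API object survives, carrying a deletion timestamp, until the kubelet confirms termination, and only then does the watch set up earlier report the final DELETED event. Sketch, 1.15 signature:

    package sketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteGracefully gives the kubelet up to 30s to stop the pod's
    // containers before the object is removed from the API.
    func deleteGracefully(client kubernetes.Interface, ns, name string) error {
        grace := int64(30)
        return client.CoreV1().Pods(ns).Delete(name, &metav1.DeleteOptions{
            GracePeriodSeconds: &grace,
        })
    }
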
Mar 18 13:23:57.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:23:57.975: INFO: namespace pods-1302 deletion completed in 6.100292549s • [SLOW TEST:23.708 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:23:57.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-729/configmap-test-509502fd-0222-475f-b006-5d87e1ee4816 STEP: Creating a pod to test consume configMaps Mar 18 13:23:58.072: INFO: Waiting up to 5m0s for pod "pod-configmaps-fa54dd92-b5a9-4018-9ceb-809d6a5ff571" in namespace "configmap-729" to be "success or failure" Mar 18 13:23:58.091: INFO: Pod "pod-configmaps-fa54dd92-b5a9-4018-9ceb-809d6a5ff571": Phase="Pending", Reason="", readiness=false. Elapsed: 18.792138ms Mar 18 13:24:00.095: INFO: Pod "pod-configmaps-fa54dd92-b5a9-4018-9ceb-809d6a5ff571": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022640218s Mar 18 13:24:02.099: INFO: Pod "pod-configmaps-fa54dd92-b5a9-4018-9ceb-809d6a5ff571": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027095226s STEP: Saw pod success Mar 18 13:24:02.099: INFO: Pod "pod-configmaps-fa54dd92-b5a9-4018-9ceb-809d6a5ff571" satisfied condition "success or failure" Mar 18 13:24:02.102: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-fa54dd92-b5a9-4018-9ceb-809d6a5ff571 container env-test: STEP: delete the pod Mar 18 13:24:02.119: INFO: Waiting for pod pod-configmaps-fa54dd92-b5a9-4018-9ceb-809d6a5ff571 to disappear Mar 18 13:24:02.139: INFO: Pod pod-configmaps-fa54dd92-b5a9-4018-9ceb-809d6a5ff571 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:24:02.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-729" for this suite. 
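
This variant ("consumable via environment variable", as opposed to the envFrom form earlier) maps one ConfigMap key to one named variable. Sketch using the ConfigMap name from the log, with a hypothetical key and variable name:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // singleKeyEnv exposes exactly one ConfigMap key as one variable.
    func singleKeyEnv() corev1.EnvVar {
        return corev1.EnvVar{
            Name: "CONFIG_DATA_1", // hypothetical variable name
            ValueFrom: &corev1.EnvVarSource{
                ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "configmap-test-509502fd-0222-475f-b006-5d87e1ee4816",
                    },
                    Key: "data-1", // hypothetical key
                },
            },
        }
    }
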
Mar 18 13:24:08.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 13:24:08.244: INFO: namespace configmap-729 deletion completed in 6.089364632s
• [SLOW TEST:10.268 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 13:24:08.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-2152
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2152
STEP: Deleting pre-stop pod
Mar 18 13:24:21.359: INFO: Saw: {
    "Hostname": "server",
    "Sent": null,
    "Received": {
        "prestop": 1
    },
    "Errors": null,
    "Log": [
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    ],
    "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 13:24:21.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2152" for this suite.
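
The "prestop": 1 counter in the JSON above is incremented by the tester pod phoning home from its preStop hook before its container is killed. The hook is declared on the container lifecycle; a sketch, 1.15 API (the handler type is corev1.Handler here, renamed in much later releases), with the callback command modeled loosely on what this test's tester does:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // withPreStop attaches a hook that runs inside the container after
    // deletion is requested but before SIGTERM is delivered.
    func withPreStop(c corev1.Container, serverIP string) corev1.Container {
        c.Lifecycle = &corev1.Lifecycle{
            PreStop: &corev1.Handler{
                Exec: &corev1.ExecAction{
                    // Illustrative: report the shutdown to the server pod.
                    Command: []string{"wget", "-O-", "--post-data={\"Source\": \"prestop\"}",
                        "http://" + serverIP + ":8080/write"},
                },
            },
        }
        return c
    }
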
Mar 18 13:24:59.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:24:59.465: INFO: namespace prestop-2152 deletion completed in 38.094635472s • [SLOW TEST:51.221 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:24:59.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 18 13:24:59.527: INFO: Waiting up to 5m0s for pod "pod-0fd83c33-5938-4d1d-875f-031cf284a74f" in namespace "emptydir-8295" to be "success or failure" Mar 18 13:24:59.532: INFO: Pod "pod-0fd83c33-5938-4d1d-875f-031cf284a74f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1429ms Mar 18 13:25:01.535: INFO: Pod "pod-0fd83c33-5938-4d1d-875f-031cf284a74f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007362198s Mar 18 13:25:03.538: INFO: Pod "pod-0fd83c33-5938-4d1d-875f-031cf284a74f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010078091s STEP: Saw pod success Mar 18 13:25:03.538: INFO: Pod "pod-0fd83c33-5938-4d1d-875f-031cf284a74f" satisfied condition "success or failure" Mar 18 13:25:03.540: INFO: Trying to get logs from node iruya-worker2 pod pod-0fd83c33-5938-4d1d-875f-031cf284a74f container test-container: STEP: delete the pod Mar 18 13:25:03.570: INFO: Waiting for pod pod-0fd83c33-5938-4d1d-875f-031cf284a74f to disappear Mar 18 13:25:03.585: INFO: Pod pod-0fd83c33-5938-4d1d-875f-031cf284a74f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:25:03.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8295" for this suite. 
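What the spec creates, roughly: a pod with an emptyDir volume on the default medium (the node's disk) mounted with mode 0777, and a test container that verifies the permissions. The container name matches the log; the image, mount path, and check command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir
spec:
  restartPolicy: Never
  containers:
  - name: test-container        # container name from the log
    image: busybox:1.29         # assumed image
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # expect 777
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # default medium, i.e. backed by node disk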
Mar 18 13:25:09.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:25:09.672: INFO: namespace emptydir-8295 deletion completed in 6.083750127s • [SLOW TEST:10.206 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:25:09.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-be098a02-6fc4-4536-b7ed-5bdb28938e2b [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:25:09.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5846" for this suite. Mar 18 13:25:15.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:25:15.837: INFO: namespace configmap-5846 deletion completed in 6.087201239s • [SLOW TEST:6.165 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:25:15.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Mar 18 13:25:15.881: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix732269407/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:25:15.952: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2599" for this suite. Mar 18 13:25:21.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:25:22.057: INFO: namespace kubectl-2599 deletion completed in 6.091849369s • [SLOW TEST:6.220 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:25:22.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 18 13:25:22.125: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ad1a597-495f-4b25-8e4f-9992f4340cc0" in namespace "downward-api-7498" to be "success or failure" Mar 18 13:25:22.132: INFO: Pod "downwardapi-volume-6ad1a597-495f-4b25-8e4f-9992f4340cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.725721ms Mar 18 13:25:24.137: INFO: Pod "downwardapi-volume-6ad1a597-495f-4b25-8e4f-9992f4340cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011389462s Mar 18 13:25:26.140: INFO: Pod "downwardapi-volume-6ad1a597-495f-4b25-8e4f-9992f4340cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01500949s Mar 18 13:25:28.144: INFO: Pod "downwardapi-volume-6ad1a597-495f-4b25-8e4f-9992f4340cc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018829788s STEP: Saw pod success Mar 18 13:25:28.144: INFO: Pod "downwardapi-volume-6ad1a597-495f-4b25-8e4f-9992f4340cc0" satisfied condition "success or failure" Mar 18 13:25:28.147: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-6ad1a597-495f-4b25-8e4f-9992f4340cc0 container client-container: STEP: delete the pod Mar 18 13:25:28.232: INFO: Waiting for pod downwardapi-volume-6ad1a597-495f-4b25-8e4f-9992f4340cc0 to disappear Mar 18 13:25:28.248: INFO: Pod downwardapi-volume-6ad1a597-495f-4b25-8e4f-9992f4340cc0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:25:28.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7498" for this suite. 
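What this spec checks: the client-container sets no CPU limit, yet a downwardAPI volume item referencing limits.cpu must still resolve — it falls back to the node's allocatable CPU. A sketch of such a pod; the container name matches the log, while the mount path and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container      # container name from the log
    image: busybox:1.29         # assumed image
    # no resources.limits.cpu is set, so the item below reports node allocatable CPU
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu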
Mar 18 13:25:34.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:25:34.351: INFO: namespace downward-api-7498 deletion completed in 6.099828811s • [SLOW TEST:12.293 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:25:34.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-0f8f6235-59e4-4b7f-a98a-48f4881d1c53 STEP: Creating a pod to test consume configMaps Mar 18 13:25:34.418: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-318a4279-b8d0-4398-b206-db66c6c709e7" in namespace "projected-9952" to be "success or failure" Mar 18 13:25:34.421: INFO: Pod "pod-projected-configmaps-318a4279-b8d0-4398-b206-db66c6c709e7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.184112ms Mar 18 13:25:36.426: INFO: Pod "pod-projected-configmaps-318a4279-b8d0-4398-b206-db66c6c709e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007855987s Mar 18 13:25:38.431: INFO: Pod "pod-projected-configmaps-318a4279-b8d0-4398-b206-db66c6c709e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012598519s STEP: Saw pod success Mar 18 13:25:38.431: INFO: Pod "pod-projected-configmaps-318a4279-b8d0-4398-b206-db66c6c709e7" satisfied condition "success or failure" Mar 18 13:25:38.434: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-318a4279-b8d0-4398-b206-db66c6c709e7 container projected-configmap-volume-test: STEP: delete the pod Mar 18 13:25:38.455: INFO: Waiting for pod pod-projected-configmaps-318a4279-b8d0-4398-b206-db66c6c709e7 to disappear Mar 18 13:25:38.457: INFO: Pod pod-projected-configmaps-318a4279-b8d0-4398-b206-db66c6c709e7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:25:38.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9952" for this suite. 
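A sketch of the projected-ConfigMap pod this spec creates, running as a non-root user so the mounted files must still be readable. The container name matches the log; the UID, key, and paths are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000             # any non-root UID; the suite's exact UID is not in the log
  containers:
  - name: projected-configmap-volume-test   # container name from the log
    image: busybox:1.29         # assumed image
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # the suite appends a unique suffix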
Mar 18 13:25:44.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:25:44.550: INFO: namespace projected-9952 deletion completed in 6.090176498s • [SLOW TEST:10.199 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:25:44.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 18 13:25:44.618: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 18 13:25:44.641: INFO: Waiting for terminating namespaces to be deleted... Mar 18 13:25:44.643: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 18 13:25:44.649: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 18 13:25:44.649: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 13:25:44.649: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 18 13:25:44.649: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 13:25:44.649: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 18 13:25:44.655: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Mar 18 13:25:44.655: INFO: Container coredns ready: true, restart count 0 Mar 18 13:25:44.655: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Mar 18 13:25:44.655: INFO: Container coredns ready: true, restart count 0 Mar 18 13:25:44.655: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Mar 18 13:25:44.655: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 13:25:44.655: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Mar 18 13:25:44.655: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fd68dd8d441071], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
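The FailedScheduling event above comes from a pod whose non-empty nodeSelector matches no label on any of the three nodes, for example (the selector key/value here are placeholders; the suite generates its own):

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod          # pod name from the event above
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1 # pause image tag as seen elsewhere in this log
  nodeSelector:
    e2e-test: no-such-label     # placeholder: matches no node, so scheduling must fail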
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:25:45.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-175" for this suite. Mar 18 13:25:51.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:25:51.773: INFO: namespace sched-pred-175 deletion completed in 6.09433742s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.223 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:25:51.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 18 13:25:51.823: INFO: PodSpec: initContainers in spec.initContainers Mar 18 13:26:39.697: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ae39a7af-f7e9-4ace-ab3e-93774f0439ff", GenerateName:"", Namespace:"init-container-7260", SelfLink:"/api/v1/namespaces/init-container-7260/pods/pod-init-ae39a7af-f7e9-4ace-ab3e-93774f0439ff", UID:"c8c3ba59-c99d-40df-b255-9701dc092c2f", ResourceVersion:"518728", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720134751, loc:(*time.Location)(0x7ea78c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"823697333"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-d6wjd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002348cc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d6wjd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d6wjd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d6wjd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f02df8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002c464e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f02e80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f02ea0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002f02ea8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002f02eac), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720134751, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720134751, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720134751, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720134751, loc:(*time.Location)(0x7ea78c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.18", StartTime:(*v1.Time)(0xc000c78c40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00204a620)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(0xc00204a690)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d86bf7d1415e2a6de03184cb72c7d31a6c45632385404c1e516be75236577b4b"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000c78e20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000c78de0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:26:39.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7260" for this suite. Mar 18 13:27:01.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:27:01.796: INFO: namespace init-container-7260 deletion completed in 22.094788862s • [SLOW TEST:70.023 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:27:01.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 18 13:27:08.668: INFO: 0 pods remaining Mar 18 13:27:08.668: INFO: 0 pods has nil DeletionTimestamp Mar 18 13:27:08.668: INFO: Mar 18 13:27:09.092: INFO: 0 pods remaining Mar 18 13:27:09.092: INFO: 0 pods has nil DeletionTimestamp Mar 18 13:27:09.093: INFO: STEP: Gathering metrics W0318 13:27:10.063337 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 13:27:10.063: INFO:
For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:27:10.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5615" for this suite. Mar 18 13:27:16.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:27:16.160: INFO: namespace gc-5615 deletion completed in 6.093960704s • [SLOW TEST:14.364 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:27:16.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-056d5f52-a863-489e-a911-318381990082 STEP: Creating a pod to test consume secrets Mar 18 13:27:16.250: INFO: Waiting up to 5m0s for pod "pod-secrets-1d7791ca-6543-4477-9fae-2bb2caba615a" in namespace "secrets-8829" to be "success or failure" Mar 18 13:27:16.326: INFO: Pod "pod-secrets-1d7791ca-6543-4477-9fae-2bb2caba615a": Phase="Pending", Reason="", readiness=false. Elapsed: 76.092622ms Mar 18 13:27:18.331: INFO: Pod "pod-secrets-1d7791ca-6543-4477-9fae-2bb2caba615a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080346528s Mar 18 13:27:20.335: INFO: Pod "pod-secrets-1d7791ca-6543-4477-9fae-2bb2caba615a": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.084575583s STEP: Saw pod success Mar 18 13:27:20.335: INFO: Pod "pod-secrets-1d7791ca-6543-4477-9fae-2bb2caba615a" satisfied condition "success or failure" Mar 18 13:27:20.338: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-1d7791ca-6543-4477-9fae-2bb2caba615a container secret-volume-test: STEP: delete the pod Mar 18 13:27:20.371: INFO: Waiting for pod pod-secrets-1d7791ca-6543-4477-9fae-2bb2caba615a to disappear Mar 18 13:27:20.382: INFO: Pod pod-secrets-1d7791ca-6543-4477-9fae-2bb2caba615a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:27:20.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8829" for this suite. Mar 18 13:27:26.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:27:26.478: INFO: namespace secrets-8829 deletion completed in 6.0927141s • [SLOW TEST:10.317 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:27:26.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 18 13:27:26.539: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c12c8e6-c177-4eb3-830f-2004913e0991" in namespace "projected-4709" to be "success or failure" Mar 18 13:27:26.543: INFO: Pod "downwardapi-volume-8c12c8e6-c177-4eb3-830f-2004913e0991": Phase="Pending", Reason="", readiness=false. Elapsed: 3.787983ms Mar 18 13:27:28.547: INFO: Pod "downwardapi-volume-8c12c8e6-c177-4eb3-830f-2004913e0991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008002996s Mar 18 13:27:30.552: INFO: Pod "downwardapi-volume-8c12c8e6-c177-4eb3-830f-2004913e0991": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012401022s STEP: Saw pod success Mar 18 13:27:30.552: INFO: Pod "downwardapi-volume-8c12c8e6-c177-4eb3-830f-2004913e0991" satisfied condition "success or failure" Mar 18 13:27:30.555: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8c12c8e6-c177-4eb3-830f-2004913e0991 container client-container: STEP: delete the pod Mar 18 13:27:30.587: INFO: Waiting for pod downwardapi-volume-8c12c8e6-c177-4eb3-830f-2004913e0991 to disappear Mar 18 13:27:30.591: INFO: Pod downwardapi-volume-8c12c8e6-c177-4eb3-830f-2004913e0991 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:27:30.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4709" for this suite. Mar 18 13:27:36.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:27:36.690: INFO: namespace projected-4709 deletion completed in 6.095208488s • [SLOW TEST:10.211 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:27:36.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-d647f429-4c07-49b5-be35-4c15fe9560ea STEP: Creating a pod to test consume configMaps Mar 18 13:27:36.761: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a643d86-d09c-481b-87da-89bab4890c0e" in namespace "configmap-3241" to be "success or failure" Mar 18 13:27:36.781: INFO: Pod "pod-configmaps-9a643d86-d09c-481b-87da-89bab4890c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.856663ms Mar 18 13:27:38.799: INFO: Pod "pod-configmaps-9a643d86-d09c-481b-87da-89bab4890c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037978354s Mar 18 13:27:40.804: INFO: Pod "pod-configmaps-9a643d86-d09c-481b-87da-89bab4890c0e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042542954s STEP: Saw pod success Mar 18 13:27:40.804: INFO: Pod "pod-configmaps-9a643d86-d09c-481b-87da-89bab4890c0e" satisfied condition "success or failure" Mar 18 13:27:40.807: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-9a643d86-d09c-481b-87da-89bab4890c0e container configmap-volume-test: STEP: delete the pod Mar 18 13:27:40.845: INFO: Waiting for pod pod-configmaps-9a643d86-d09c-481b-87da-89bab4890c0e to disappear Mar 18 13:27:40.865: INFO: Pod pod-configmaps-9a643d86-d09c-481b-87da-89bab4890c0e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:27:40.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3241" for this suite. Mar 18 13:27:46.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:27:46.988: INFO: namespace configmap-3241 deletion completed in 6.118972424s • [SLOW TEST:10.298 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:27:46.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Mar 18 13:27:47.114: INFO: namespace kubectl-357 Mar 18 13:27:47.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-357' Mar 18 13:27:47.386: INFO: stderr: "" Mar 18 13:27:47.386: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 18 13:27:48.391: INFO: Selector matched 1 pods for map[app:redis] Mar 18 13:27:48.391: INFO: Found 0 / 1 Mar 18 13:27:49.391: INFO: Selector matched 1 pods for map[app:redis] Mar 18 13:27:49.392: INFO: Found 0 / 1 Mar 18 13:27:50.391: INFO: Selector matched 1 pods for map[app:redis] Mar 18 13:27:50.391: INFO: Found 1 / 1 Mar 18 13:27:50.391: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 18 13:27:50.395: INFO: Selector matched 1 pods for map[app:redis] Mar 18 13:27:50.395: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 18 13:27:50.395: INFO: wait on redis-master startup in kubectl-357 Mar 18 13:27:50.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qj45s redis-master --namespace=kubectl-357' Mar 18 13:27:50.508: INFO: stderr: "" Mar 18 13:27:50.508: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 18 Mar 13:27:49.741 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Mar 13:27:49.741 # Server started, Redis version 3.2.12\n1:M 18 Mar 13:27:49.741 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Mar 13:27:49.741 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Mar 18 13:27:50.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-357' Mar 18 13:27:50.647: INFO: stderr: "" Mar 18 13:27:50.647: INFO: stdout: "service/rm2 exposed\n" Mar 18 13:27:50.656: INFO: Service rm2 in namespace kubectl-357 found. STEP: exposing service Mar 18 13:27:52.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-357' Mar 18 13:27:52.798: INFO: stderr: "" Mar 18 13:27:52.798: INFO: stdout: "service/rm3 exposed\n" Mar 18 13:27:52.847: INFO: Service rm3 in namespace kubectl-357 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:27:54.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-357" for this suite. 
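The two kubectl expose invocations above are roughly equivalent to creating Services by hand. For rm2 (ports taken from the log; the selector follows from the RC's app=redis pod label shown above), with rm3 differing only in its name and port 2345:

apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis                  # copied from the RC's pod template labels
  ports:
  - protocol: TCP
    port: 1234                  # --port
    targetPort: 6379            # --target-port, the Redis port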
Mar 18 13:28:16.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:28:16.956: INFO: namespace kubectl-357 deletion completed in 22.097355766s • [SLOW TEST:29.968 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:28:16.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 18 13:28:17.561: INFO: Pod name wrapped-volume-race-ddd27602-2956-4522-a5e8-3450e552b4bc: Found 0 pods out of 5 Mar 18 13:28:22.570: INFO: Pod name wrapped-volume-race-ddd27602-2956-4522-a5e8-3450e552b4bc: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ddd27602-2956-4522-a5e8-3450e552b4bc in namespace emptydir-wrapper-1681, will wait for the garbage collector to delete the pods Mar 18 13:28:36.660: INFO: Deleting ReplicationController wrapped-volume-race-ddd27602-2956-4522-a5e8-3450e552b4bc took: 7.392304ms Mar 18 13:28:36.961: INFO: Terminating ReplicationController wrapped-volume-race-ddd27602-2956-4522-a5e8-3450e552b4bc pods took: 300.42688ms STEP: Creating RC which spawns configmap-volume pods Mar 18 13:29:13.621: INFO: Pod name wrapped-volume-race-e5bec1cf-1007-49f0-b2af-f93b8a29123f: Found 0 pods out of 5 Mar 18 13:29:18.655: INFO: Pod name wrapped-volume-race-e5bec1cf-1007-49f0-b2af-f93b8a29123f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e5bec1cf-1007-49f0-b2af-f93b8a29123f in namespace emptydir-wrapper-1681, will wait for the garbage collector to delete the pods Mar 18 13:29:32.758: INFO: Deleting ReplicationController wrapped-volume-race-e5bec1cf-1007-49f0-b2af-f93b8a29123f took: 8.081577ms Mar 18 13:29:33.059: INFO: Terminating ReplicationController wrapped-volume-race-e5bec1cf-1007-49f0-b2af-f93b8a29123f pods took: 300.247885ms STEP: Creating RC which spawns configmap-volume pods Mar 18 13:30:12.285: INFO: Pod name wrapped-volume-race-d817f858-2a54-4a48-a790-2917c3c94a2a: Found 0 pods out of 5 Mar 18 13:30:17.319: INFO: Pod name wrapped-volume-race-d817f858-2a54-4a48-a790-2917c3c94a2a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d817f858-2a54-4a48-a790-2917c3c94a2a in namespace emptydir-wrapper-1681, will wait for the garbage collector to delete 
the pods Mar 18 13:30:31.415: INFO: Deleting ReplicationController wrapped-volume-race-d817f858-2a54-4a48-a790-2917c3c94a2a took: 7.621018ms Mar 18 13:30:31.715: INFO: Terminating ReplicationController wrapped-volume-race-d817f858-2a54-4a48-a790-2917c3c94a2a pods took: 300.267076ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:31:12.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1681" for this suite. Mar 18 13:31:20.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:31:20.969: INFO: namespace emptydir-wrapper-1681 deletion completed in 8.130533575s • [SLOW TEST:184.013 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:31:20.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 13:31:21.001: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:31:25.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2096" for this suite. 
Mar 18 13:32:15.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:32:15.158: INFO: namespace pods-2096 deletion completed in 50.103781642s • [SLOW TEST:54.189 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:32:15.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2331 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2331 STEP: Creating statefulset with conflicting port in namespace statefulset-2331 STEP: Waiting until pod test-pod will start running in namespace statefulset-2331 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2331 Mar 18 13:32:19.261: INFO: Observed stateful pod in namespace: statefulset-2331, name: ss-0, uid: d26e7ebc-b3f0-43f4-b331-8d35e1e2d7c3, status phase: Pending. Waiting for statefulset controller to delete. Mar 18 13:32:22.148: INFO: Observed stateful pod in namespace: statefulset-2331, name: ss-0, uid: d26e7ebc-b3f0-43f4-b331-8d35e1e2d7c3, status phase: Failed. Waiting for statefulset controller to delete. Mar 18 13:32:22.200: INFO: Observed stateful pod in namespace: statefulset-2331, name: ss-0, uid: d26e7ebc-b3f0-43f4-b331-8d35e1e2d7c3, status phase: Failed. Waiting for statefulset controller to delete. 
Mar 18 13:32:22.214: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2331 STEP: Removing pod with conflicting port in namespace statefulset-2331 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2331 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 18 13:32:26.278: INFO: Deleting all statefulset in ns statefulset-2331 Mar 18 13:32:26.282: INFO: Scaling statefulset ss to 0 Mar 18 13:32:36.299: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 13:32:36.302: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:32:36.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2331" for this suite. Mar 18 13:32:42.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:32:42.458: INFO: namespace statefulset-2331 deletion completed in 6.09089053s • [SLOW TEST:27.300 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:32:42.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-c665c639-6f9c-4d26-a213-d7f6d1b0d267 STEP: Creating a pod to test consume secrets Mar 18 13:32:42.542: INFO: Waiting up to 5m0s for pod "pod-secrets-f2bf2baf-4476-491f-a4dd-c35ea6c05177" in namespace "secrets-5973" to be "success or failure" Mar 18 13:32:42.558: INFO: Pod "pod-secrets-f2bf2baf-4476-491f-a4dd-c35ea6c05177": Phase="Pending", Reason="", readiness=false. Elapsed: 15.299498ms Mar 18 13:32:44.561: INFO: Pod "pod-secrets-f2bf2baf-4476-491f-a4dd-c35ea6c05177": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018952433s Mar 18 13:32:46.566: INFO: Pod "pod-secrets-f2bf2baf-4476-491f-a4dd-c35ea6c05177": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023445542s STEP: Saw pod success Mar 18 13:32:46.566: INFO: Pod "pod-secrets-f2bf2baf-4476-491f-a4dd-c35ea6c05177" satisfied condition "success or failure" Mar 18 13:32:46.569: INFO: Trying to get logs from node iruya-worker pod pod-secrets-f2bf2baf-4476-491f-a4dd-c35ea6c05177 container secret-volume-test: STEP: delete the pod Mar 18 13:32:46.603: INFO: Waiting for pod pod-secrets-f2bf2baf-4476-491f-a4dd-c35ea6c05177 to disappear Mar 18 13:32:46.612: INFO: Pod pod-secrets-f2bf2baf-4476-491f-a4dd-c35ea6c05177 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:32:46.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5973" for this suite. Mar 18 13:32:52.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:32:52.712: INFO: namespace secrets-5973 deletion completed in 6.096712899s • [SLOW TEST:10.253 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:32:52.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-7e37b48f-85d5-463d-bdd4-9680bb43a0d2 in namespace container-probe-3609 Mar 18 13:32:56.784: INFO: Started pod test-webserver-7e37b48f-85d5-463d-bdd4-9680bb43a0d2 in namespace container-probe-3609 STEP: checking the pod's current state and verifying that restartCount is present Mar 18 13:32:56.787: INFO: Initial restart count of pod test-webserver-7e37b48f-85d5-463d-bdd4-9680bb43a0d2 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:36:57.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3609" for this suite. 
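The container-probe test above asserts the absence of a restart: a pod with an HTTP liveness probe against /healthz that keeps returning success must leave restartCount at 0 for the whole observation window (roughly four minutes, per the timestamps). A minimal sketch of such a container spec; the image and port are assumptions, and the probe wrapper is the v1.Handler type of the v1.15-era API this run uses (renamed ProbeHandler in later releases):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := v1.Container{
		Name:  "test-webserver",
		Image: "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0", // assumed image
		LivenessProbe: &v1.Probe{
			Handler: v1.Handler{
				HTTPGet: &v1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
			},
			InitialDelaySeconds: 15, // give the server time to come up
			FailureThreshold:    3,  // three consecutive failures before the kubelet restarts it
		},
	}
	fmt.Println(c.Name)
}
```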
Mar 18 13:37:03.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:37:03.510: INFO: namespace container-probe-3609 deletion completed in 6.133405071s • [SLOW TEST:250.797 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:37:03.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 18 13:37:03.582: INFO: Waiting up to 5m0s for pod "pod-67b0db10-4411-4437-b2cb-b018de322ddc" in namespace "emptydir-7782" to be "success or failure" Mar 18 13:37:03.586: INFO: Pod "pod-67b0db10-4411-4437-b2cb-b018de322ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.269038ms Mar 18 13:37:05.590: INFO: Pod "pod-67b0db10-4411-4437-b2cb-b018de322ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007270724s Mar 18 13:37:07.594: INFO: Pod "pod-67b0db10-4411-4437-b2cb-b018de322ddc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011635216s STEP: Saw pod success Mar 18 13:37:07.594: INFO: Pod "pod-67b0db10-4411-4437-b2cb-b018de322ddc" satisfied condition "success or failure" Mar 18 13:37:07.597: INFO: Trying to get logs from node iruya-worker pod pod-67b0db10-4411-4437-b2cb-b018de322ddc container test-container: STEP: delete the pod Mar 18 13:37:07.626: INFO: Waiting for pod pod-67b0db10-4411-4437-b2cb-b018de322ddc to disappear Mar 18 13:37:07.652: INFO: Pod pod-67b0db10-4411-4437-b2cb-b018de322ddc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:37:07.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7782" for this suite. 
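The emptyDir cases in this run, (non-root,0666,tmpfs), (root,0644,tmpfs), (root,0644,default), and (non-root,0666,default), all follow one pattern: mount an emptyDir, have a run-once container print the mount's permissions, and assert on the pod log. Only the medium, the requested mode, and the container's UID vary. A sketch of the tmpfs variant, with busybox standing in for the suite's mounttest image:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever, // run once, then the log is inspected
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "stat -c '%a' /test-volume"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Medium "Memory" is tmpfs; leave Medium empty for the
					// "default" (node filesystem) variants.
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```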
Mar 18 13:37:13.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:37:13.739: INFO: namespace emptydir-7782 deletion completed in 6.083549573s • [SLOW TEST:10.229 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:37:13.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-0ebefdb8-4c9c-43c0-8884-91c8df6e7d1e STEP: Creating a pod to test consume secrets Mar 18 13:37:13.908: INFO: Waiting up to 5m0s for pod "pod-secrets-9fe8ff7a-e8c5-425e-b3ea-24ed1747fefe" in namespace "secrets-6797" to be "success or failure" Mar 18 13:37:13.912: INFO: Pod "pod-secrets-9fe8ff7a-e8c5-425e-b3ea-24ed1747fefe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.925436ms Mar 18 13:37:15.936: INFO: Pod "pod-secrets-9fe8ff7a-e8c5-425e-b3ea-24ed1747fefe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027940687s Mar 18 13:37:17.940: INFO: Pod "pod-secrets-9fe8ff7a-e8c5-425e-b3ea-24ed1747fefe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032391618s STEP: Saw pod success Mar 18 13:37:17.940: INFO: Pod "pod-secrets-9fe8ff7a-e8c5-425e-b3ea-24ed1747fefe" satisfied condition "success or failure" Mar 18 13:37:17.943: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-9fe8ff7a-e8c5-425e-b3ea-24ed1747fefe container secret-volume-test: STEP: delete the pod Mar 18 13:37:17.976: INFO: Waiting for pod pod-secrets-9fe8ff7a-e8c5-425e-b3ea-24ed1747fefe to disappear Mar 18 13:37:17.999: INFO: Pod pod-secrets-9fe8ff7a-e8c5-425e-b3ea-24ed1747fefe no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:37:17.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6797" for this suite. Mar 18 13:37:24.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:37:24.100: INFO: namespace secrets-6797 deletion completed in 6.097907196s STEP: Destroying namespace "secret-namespace-7699" for this suite. 
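The secret test just above creates two secrets with the same name in two namespaces (secrets-6797 and secret-namespace-7699) and verifies the pod mounts the one from its own namespace; a SecretVolumeSource can only ever name a secret in the pod's namespace. A sketch with the secret name taken from the log; the data key and mount path are assumptions:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "pod-secrets-demo",
			Namespace: "secrets-6797", // the secret resolves here, never in secret-namespace-7699
		},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/secret-volume/data-1"}, // assumed key name
				VolumeMounts: []v1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "secret-volume",
				VolumeSource: v1.VolumeSource{
					Secret: &v1.SecretVolumeSource{
						SecretName: "secret-test-0ebefdb8-4c9c-43c0-8884-91c8df6e7d1e",
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```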
Mar 18 13:37:30.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:37:30.225: INFO: namespace secret-namespace-7699 deletion completed in 6.125012466s • [SLOW TEST:16.486 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:37:30.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 18 13:37:30.285: INFO: Waiting up to 5m0s for pod "pod-46692111-e3b1-46be-bd68-694dd99d8902" in namespace "emptydir-5375" to be "success or failure" Mar 18 13:37:30.289: INFO: Pod "pod-46692111-e3b1-46be-bd68-694dd99d8902": Phase="Pending", Reason="", readiness=false. Elapsed: 3.468331ms Mar 18 13:37:32.292: INFO: Pod "pod-46692111-e3b1-46be-bd68-694dd99d8902": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006858236s Mar 18 13:37:34.311: INFO: Pod "pod-46692111-e3b1-46be-bd68-694dd99d8902": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025392495s STEP: Saw pod success Mar 18 13:37:34.311: INFO: Pod "pod-46692111-e3b1-46be-bd68-694dd99d8902" satisfied condition "success or failure" Mar 18 13:37:34.314: INFO: Trying to get logs from node iruya-worker pod pod-46692111-e3b1-46be-bd68-694dd99d8902 container test-container: STEP: delete the pod Mar 18 13:37:34.330: INFO: Waiting for pod pod-46692111-e3b1-46be-bd68-694dd99d8902 to disappear Mar 18 13:37:34.334: INFO: Pod pod-46692111-e3b1-46be-bd68-694dd99d8902 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:37:34.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5375" for this suite. 
Mar 18 13:37:40.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:37:40.432: INFO: namespace emptydir-5375 deletion completed in 6.094969667s • [SLOW TEST:10.207 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:37:40.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 18 13:37:40.546: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-164,SelfLink:/api/v1/namespaces/watch-164/configmaps/e2e-watch-test-watch-closed,UID:d505ffb2-c00e-4a70-bf15-ed60ca8500e8,ResourceVersion:521428,Generation:0,CreationTimestamp:2020-03-18 13:37:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 18 13:37:40.546: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-164,SelfLink:/api/v1/namespaces/watch-164/configmaps/e2e-watch-test-watch-closed,UID:d505ffb2-c00e-4a70-bf15-ed60ca8500e8,ResourceVersion:521429,Generation:0,CreationTimestamp:2020-03-18 13:37:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 18 13:37:40.558: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-164,SelfLink:/api/v1/namespaces/watch-164/configmaps/e2e-watch-test-watch-closed,UID:d505ffb2-c00e-4a70-bf15-ed60ca8500e8,ResourceVersion:521430,Generation:0,CreationTimestamp:2020-03-18 13:37:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 18 13:37:40.558: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-164,SelfLink:/api/v1/namespaces/watch-164/configmaps/e2e-watch-test-watch-closed,UID:d505ffb2-c00e-4a70-bf15-ed60ca8500e8,ResourceVersion:521431,Generation:0,CreationTimestamp:2020-03-18 13:37:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:37:40.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-164" for this suite. Mar 18 13:37:46.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:37:46.651: INFO: namespace watch-164 deletion completed in 6.087658838s • [SLOW TEST:6.218 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:37:46.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-635cab34-6d7f-4db1-a826-573453860d02 STEP: Creating secret with name secret-projected-all-test-volume-af09ac4d-3cab-4bb7-be3b-6fbff018896d STEP: Creating a pod to test Check all projections for projected volume plugin Mar 18 13:37:46.774: INFO: Waiting up to 5m0s for pod "projected-volume-6495c028-57fa-4c09-94e4-b19b2f8e96ad" in namespace "projected-307" to be "success or failure" Mar 18 
13:37:46.779: INFO: Pod "projected-volume-6495c028-57fa-4c09-94e4-b19b2f8e96ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.92533ms Mar 18 13:37:48.783: INFO: Pod "projected-volume-6495c028-57fa-4c09-94e4-b19b2f8e96ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008757313s Mar 18 13:37:50.787: INFO: Pod "projected-volume-6495c028-57fa-4c09-94e4-b19b2f8e96ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013336949s STEP: Saw pod success Mar 18 13:37:50.788: INFO: Pod "projected-volume-6495c028-57fa-4c09-94e4-b19b2f8e96ad" satisfied condition "success or failure" Mar 18 13:37:50.791: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-6495c028-57fa-4c09-94e4-b19b2f8e96ad container projected-all-volume-test: STEP: delete the pod Mar 18 13:37:50.809: INFO: Waiting for pod projected-volume-6495c028-57fa-4c09-94e4-b19b2f8e96ad to disappear Mar 18 13:37:50.814: INFO: Pod projected-volume-6495c028-57fa-4c09-94e4-b19b2f8e96ad no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:37:50.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-307" for this suite. Mar 18 13:37:56.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:37:56.929: INFO: namespace projected-307 deletion completed in 6.11263839s • [SLOW TEST:10.278 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:37:56.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 18 13:37:56.996: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 18 13:38:02.001: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:38:03.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3965" for this suite. 
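Two tests back, the Watchers case closed a watch on a configmap, mutated it twice while no watch was open, then started a new watch from the last resourceVersion it had observed (521429 in this log) and received the missed MODIFIED and DELETED events. The mechanism is simply the ResourceVersion field of ListOptions. A client-go sketch; the no-context Watch signature matches the client-go vintage of this run (newer releases take a context as the first argument):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Resume from the last resourceVersion the closed watch delivered;
	// the apiserver replays every change that happened after it.
	w, err := client.CoreV1().ConfigMaps("watch-164").Watch(metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: "521429",
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // MODIFIED (mutation: 2), then DELETED, as in the log
	}
}
```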
Mar 18 13:38:09.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:38:09.236: INFO: namespace replication-controller-3965 deletion completed in 6.201962798s • [SLOW TEST:12.306 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:38:09.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-2292c253-f08e-4b22-afa2-39abbf34359f STEP: Creating a pod to test consume configMaps Mar 18 13:38:09.407: INFO: Waiting up to 5m0s for pod "pod-configmaps-56edd828-d1e8-4a9b-a64f-06afb662b6c0" in namespace "configmap-8789" to be "success or failure" Mar 18 13:38:09.409: INFO: Pod "pod-configmaps-56edd828-d1e8-4a9b-a64f-06afb662b6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149557ms Mar 18 13:38:11.426: INFO: Pod "pod-configmaps-56edd828-d1e8-4a9b-a64f-06afb662b6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018630344s Mar 18 13:38:13.429: INFO: Pod "pod-configmaps-56edd828-d1e8-4a9b-a64f-06afb662b6c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022525013s STEP: Saw pod success Mar 18 13:38:13.430: INFO: Pod "pod-configmaps-56edd828-d1e8-4a9b-a64f-06afb662b6c0" satisfied condition "success or failure" Mar 18 13:38:13.432: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-56edd828-d1e8-4a9b-a64f-06afb662b6c0 container configmap-volume-test: STEP: delete the pod Mar 18 13:38:13.452: INFO: Waiting for pod pod-configmaps-56edd828-d1e8-4a9b-a64f-06afb662b6c0 to disappear Mar 18 13:38:13.455: INFO: Pod pod-configmaps-56edd828-d1e8-4a9b-a64f-06afb662b6c0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:38:13.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8789" for this suite. 
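In the ConfigMap defaultMode test above, the field under test is ConfigMapVolumeSource.DefaultMode, which sets the permission bits on every file projected from the configmap. A sketch of the volume; the configmap name is from the log, while the 0400 value is an assumption (the test only requires some non-default mode):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // assumed mode; applied to every file in the volume
	vol := v1.Volume{
		Name: "configmap-volume",
		VolumeSource: v1.VolumeSource{
			ConfigMap: &v1.ConfigMapVolumeSource{
				LocalObjectReference: v1.LocalObjectReference{
					Name: "configmap-test-volume-2292c253-f08e-4b22-afa2-39abbf34359f",
				},
				DefaultMode: &mode,
			},
		},
	}
	fmt.Println(vol.Name)
}
```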
Mar 18 13:38:19.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:38:19.560: INFO: namespace configmap-8789 deletion completed in 6.102794946s • [SLOW TEST:10.324 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:38:19.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0318 13:38:29.667472 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 13:38:29.667: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:38:29.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1626" for this suite. 
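The garbage-collector test deletes the rc "when not orphaning", that is, with a non-orphan propagation policy, so the GC removes the rc's pods too; the "wait for all pods to be garbage collected" step is exactly that. A client-go sketch of such a delete; the rc name is illustrative, and the Delete signature matches the no-context client-go of this era:

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Background propagation: the rc disappears immediately and the garbage
	// collector then deletes its pods (Orphan propagation would leave them).
	policy := metav1.DeletePropagationBackground
	err = client.CoreV1().ReplicationControllers("gc-1626").Delete(
		"simpletest.rc", // illustrative name
		&metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	if err != nil {
		panic(err)
	}
}
```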
Mar 18 13:38:35.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:38:35.762: INFO: namespace gc-1626 deletion completed in 6.091159183s • [SLOW TEST:16.201 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:38:35.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-29bb79c5-cc2f-44cb-b1ba-0540c0367dfc STEP: Creating a pod to test consume secrets Mar 18 13:38:35.841: INFO: Waiting up to 5m0s for pod "pod-secrets-935a3305-b434-4cad-8fc7-e8cfac7c8330" in namespace "secrets-3006" to be "success or failure" Mar 18 13:38:35.847: INFO: Pod "pod-secrets-935a3305-b434-4cad-8fc7-e8cfac7c8330": Phase="Pending", Reason="", readiness=false. Elapsed: 5.746348ms Mar 18 13:38:37.851: INFO: Pod "pod-secrets-935a3305-b434-4cad-8fc7-e8cfac7c8330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010344431s Mar 18 13:38:39.856: INFO: Pod "pod-secrets-935a3305-b434-4cad-8fc7-e8cfac7c8330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014380586s STEP: Saw pod success Mar 18 13:38:39.856: INFO: Pod "pod-secrets-935a3305-b434-4cad-8fc7-e8cfac7c8330" satisfied condition "success or failure" Mar 18 13:38:39.858: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-935a3305-b434-4cad-8fc7-e8cfac7c8330 container secret-env-test: STEP: delete the pod Mar 18 13:38:39.924: INFO: Waiting for pod pod-secrets-935a3305-b434-4cad-8fc7-e8cfac7c8330 to disappear Mar 18 13:38:39.943: INFO: Pod pod-secrets-935a3305-b434-4cad-8fc7-e8cfac7c8330 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:38:39.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3006" for this suite. 
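The env-var secret test consumes the same kind of secret through EnvVarSource.SecretKeyRef instead of a volume; the container echoes the variable and the test asserts on the pod log. A sketch with the secret name from the log; the key and variable name are assumptions:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	env := v1.EnvVar{
		Name: "SECRET_DATA", // assumed variable name
		ValueFrom: &v1.EnvVarSource{
			SecretKeyRef: &v1.SecretKeySelector{
				LocalObjectReference: v1.LocalObjectReference{
					Name: "secret-test-29bb79c5-cc2f-44cb-b1ba-0540c0367dfc",
				},
				Key: "data-1", // assumed key in the secret's Data map
			},
		},
	}
	fmt.Println(env.Name)
}
```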
Mar 18 13:38:45.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:38:46.042: INFO: namespace secrets-3006 deletion completed in 6.094694591s • [SLOW TEST:10.280 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:38:46.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-f979dfbd-c411-4e9c-9616-8d21f64272af STEP: Creating a pod to test consume configMaps Mar 18 13:38:46.111: INFO: Waiting up to 5m0s for pod "pod-configmaps-74b4bd6f-6cd8-473b-868c-9206be9ce73b" in namespace "configmap-1541" to be "success or failure" Mar 18 13:38:46.114: INFO: Pod "pod-configmaps-74b4bd6f-6cd8-473b-868c-9206be9ce73b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.315742ms Mar 18 13:38:48.118: INFO: Pod "pod-configmaps-74b4bd6f-6cd8-473b-868c-9206be9ce73b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007505309s Mar 18 13:38:50.123: INFO: Pod "pod-configmaps-74b4bd6f-6cd8-473b-868c-9206be9ce73b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012094529s STEP: Saw pod success Mar 18 13:38:50.123: INFO: Pod "pod-configmaps-74b4bd6f-6cd8-473b-868c-9206be9ce73b" satisfied condition "success or failure" Mar 18 13:38:50.126: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-74b4bd6f-6cd8-473b-868c-9206be9ce73b container configmap-volume-test: STEP: delete the pod Mar 18 13:38:50.161: INFO: Waiting for pod pod-configmaps-74b4bd6f-6cd8-473b-868c-9206be9ce73b to disappear Mar 18 13:38:50.174: INFO: Pod pod-configmaps-74b4bd6f-6cd8-473b-868c-9206be9ce73b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:38:50.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1541" for this suite. 
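The "mappings and Item mode" variant differs from the defaultMode sketch earlier only in the Items list: each configmap key is remapped to an explicit relative path with its own per-file Mode. A sketch; the key, path, and mode values are assumptions:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	itemMode := int32(0400) // per-item mode; overrides DefaultMode for this file
	src := v1.ConfigMapVolumeSource{
		LocalObjectReference: v1.LocalObjectReference{
			Name: "configmap-test-volume-map-f979dfbd-c411-4e9c-9616-8d21f64272af",
		},
		Items: []v1.KeyToPath{{
			Key:  "data-1",         // assumed key in the configmap
			Path: "path/to/data-2", // the file appears at <mountPath>/path/to/data-2
			Mode: &itemMode,
		}},
	}
	fmt.Println(src.Name)
}
```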
Mar 18 13:38:56.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:38:56.267: INFO: namespace configmap-1541 deletion completed in 6.090217194s • [SLOW TEST:10.224 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:38:56.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 18 13:38:56.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8713' Mar 18 13:38:58.455: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 13:38:58.455: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Mar 18 13:38:58.480: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-wpd7r] Mar 18 13:38:58.480: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-wpd7r" in namespace "kubectl-8713" to be "running and ready" Mar 18 13:38:58.515: INFO: Pod "e2e-test-nginx-rc-wpd7r": Phase="Pending", Reason="", readiness=false. Elapsed: 34.948541ms Mar 18 13:39:00.519: INFO: Pod "e2e-test-nginx-rc-wpd7r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038737907s Mar 18 13:39:02.523: INFO: Pod "e2e-test-nginx-rc-wpd7r": Phase="Running", Reason="", readiness=true. Elapsed: 4.042876331s Mar 18 13:39:02.523: INFO: Pod "e2e-test-nginx-rc-wpd7r" satisfied condition "running and ready" Mar 18 13:39:02.523: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-wpd7r] Mar 18 13:39:02.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-8713' Mar 18 13:39:02.640: INFO: stderr: "" Mar 18 13:39:02.641: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Mar 18 13:39:02.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8713' Mar 18 13:39:02.741: INFO: stderr: "" Mar 18 13:39:02.741: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:39:02.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8713" for this suite. Mar 18 13:39:24.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:39:24.854: INFO: namespace kubectl-8713 deletion completed in 22.092829876s • [SLOW TEST:28.587 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:39:24.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 18 13:39:24.917: INFO: Waiting up to 5m0s for pod "pod-0944a545-c544-4f42-bc17-7330d1fc1bbd" in namespace "emptydir-9172" to be "success or failure" Mar 18 13:39:24.920: INFO: Pod "pod-0944a545-c544-4f42-bc17-7330d1fc1bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.165245ms Mar 18 13:39:26.923: INFO: Pod "pod-0944a545-c544-4f42-bc17-7330d1fc1bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006212131s Mar 18 13:39:28.927: INFO: Pod "pod-0944a545-c544-4f42-bc17-7330d1fc1bbd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010395901s STEP: Saw pod success Mar 18 13:39:28.927: INFO: Pod "pod-0944a545-c544-4f42-bc17-7330d1fc1bbd" satisfied condition "success or failure" Mar 18 13:39:28.930: INFO: Trying to get logs from node iruya-worker pod pod-0944a545-c544-4f42-bc17-7330d1fc1bbd container test-container: STEP: delete the pod Mar 18 13:39:28.946: INFO: Waiting for pod pod-0944a545-c544-4f42-bc17-7330d1fc1bbd to disappear Mar 18 13:39:28.950: INFO: Pod pod-0944a545-c544-4f42-bc17-7330d1fc1bbd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:39:28.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9172" for this suite. Mar 18 13:39:34.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:39:35.046: INFO: namespace emptydir-9172 deletion completed in 6.092498107s • [SLOW TEST:10.191 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:39:35.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 13:39:35.104: INFO: Creating deployment "test-recreate-deployment" Mar 18 13:39:35.116: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 18 13:39:35.138: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 18 13:39:37.145: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 18 13:39:37.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720135575, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720135575, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720135575, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720135575, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 13:39:39.150: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 18 13:39:39.156: INFO: Updating deployment test-recreate-deployment Mar 18 13:39:39.156: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 18 13:39:39.415: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-1346,SelfLink:/apis/apps/v1/namespaces/deployment-1346/deployments/test-recreate-deployment,UID:aff2e75a-67d4-41ba-85ce-365a9b4e0986,ResourceVersion:521962,Generation:2,CreationTimestamp:2020-03-18 13:39:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-18 13:39:39 +0000 UTC 2020-03-18 13:39:39 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-18 13:39:39 +0000 UTC 2020-03-18 13:39:35 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 18 13:39:39.444: 
INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-1346,SelfLink:/apis/apps/v1/namespaces/deployment-1346/replicasets/test-recreate-deployment-5c8c9cc69d,UID:5f82083a-d177-478e-8d16-16e1dce5beb2,ResourceVersion:521959,Generation:1,CreationTimestamp:2020-03-18 13:39:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment aff2e75a-67d4-41ba-85ce-365a9b4e0986 0xc002857b27 0xc002857b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 18 13:39:39.444: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 18 13:39:39.444: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-1346,SelfLink:/apis/apps/v1/namespaces/deployment-1346/replicasets/test-recreate-deployment-6df85df6b9,UID:35ba63da-0499-4c0a-a646-3008d0795828,ResourceVersion:521951,Generation:2,CreationTimestamp:2020-03-18 13:39:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment aff2e75a-67d4-41ba-85ce-365a9b4e0986 0xc002857bf7 0xc002857bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 18 13:39:39.448: INFO: Pod "test-recreate-deployment-5c8c9cc69d-vckfg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-vckfg,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-1346,SelfLink:/api/v1/namespaces/deployment-1346/pods/test-recreate-deployment-5c8c9cc69d-vckfg,UID:d0121ba3-800a-4f66-9e11-ab92c67ddaef,ResourceVersion:521963,Generation:0,CreationTimestamp:2020-03-18 13:39:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 5f82083a-d177-478e-8d16-16e1dce5beb2 0xc002fe24a7 0xc002fe24a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-phvqt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-phvqt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-phvqt true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fe2520} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fe2540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 13:39:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 13:39:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 13:39:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 13:39:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-18 13:39:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:39:39.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1346" for this suite. 
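The object dumps above show the mechanics of the Recreate strategy: the old ReplicaSet (revision 1, the redis image) is scaled to Replicas:*0 before the new ReplicaSet (revision 2, nginx) is created, so old and new pods never run side by side, unlike the default RollingUpdate. A sketch of the deployment; names, labels, and images are read off the logged objects, and only the replica-count variable is added:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment", Namespace: "deployment-1346"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod-3"}},
			// Recreate: scale the old ReplicaSet to zero first, then bring
			// up the new one; no overlap between revisions.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod-3"}},
				Spec: v1.PodSpec{Containers: []v1.Container{{
					Name:  "nginx",
					Image: "docker.io/library/nginx:1.14-alpine", // revision 2; revision 1 ran redis
				}}},
			},
		},
	}
	fmt.Println(d.Name)
}
```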
Mar 18 13:39:45.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:39:45.751: INFO: namespace deployment-1346 deletion completed in 6.299125296s • [SLOW TEST:10.704 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:39:45.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-d2ce02b6-df96-4d80-8b7c-8f084ed01a87 STEP: Creating a pod to test consume configMaps Mar 18 13:39:45.839: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bce0e88b-0430-4c56-8bc1-5b98c306b409" in namespace "projected-8421" to be "success or failure" Mar 18 13:39:45.860: INFO: Pod "pod-projected-configmaps-bce0e88b-0430-4c56-8bc1-5b98c306b409": Phase="Pending", Reason="", readiness=false. Elapsed: 20.25375ms Mar 18 13:39:47.864: INFO: Pod "pod-projected-configmaps-bce0e88b-0430-4c56-8bc1-5b98c306b409": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024895197s Mar 18 13:39:49.868: INFO: Pod "pod-projected-configmaps-bce0e88b-0430-4c56-8bc1-5b98c306b409": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029009897s STEP: Saw pod success Mar 18 13:39:49.868: INFO: Pod "pod-projected-configmaps-bce0e88b-0430-4c56-8bc1-5b98c306b409" satisfied condition "success or failure" Mar 18 13:39:49.872: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-bce0e88b-0430-4c56-8bc1-5b98c306b409 container projected-configmap-volume-test: STEP: delete the pod Mar 18 13:39:49.904: INFO: Waiting for pod pod-projected-configmaps-bce0e88b-0430-4c56-8bc1-5b98c306b409 to disappear Mar 18 13:39:49.915: INFO: Pod pod-projected-configmaps-bce0e88b-0430-4c56-8bc1-5b98c306b409 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:39:49.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8421" for this suite. 
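Both projected-volume tests in this run (the combined-sources one earlier and the configMap defaultMode one just above) exercise a single volume type that merges several sources under one mount point. A sketch naming the three source kinds the combined test creates; the source names are from the log, the downward API path is an assumption:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	vol := v1.Volume{
		Name: "projected-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{
					{ConfigMap: &v1.ConfigMapProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: "configmap-projected-all-test-volume-635cab34-6d7f-4db1-a826-573453860d02"},
					}},
					{Secret: &v1.SecretProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: "secret-projected-all-test-volume-af09ac4d-3cab-4bb7-be3b-6fbff018896d"},
					}},
					{DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path:     "podname", // assumed path within the mount
							FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(vol.Name)
}
```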
Mar 18 13:39:55.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:39:56.004: INFO: namespace projected-8421 deletion completed in 6.085948816s • [SLOW TEST:10.254 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:39:56.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 18 13:40:04.251: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 13:40:04.291: INFO: Pod pod-with-poststart-http-hook still exists Mar 18 13:40:06.291: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 13:40:06.319: INFO: Pod pod-with-poststart-http-hook still exists Mar 18 13:40:08.291: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 13:40:08.296: INFO: Pod pod-with-poststart-http-hook still exists Mar 18 13:40:10.291: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 13:40:10.295: INFO: Pod pod-with-poststart-http-hook still exists Mar 18 13:40:12.291: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 18 13:40:12.295: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:40:12.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4824" for this suite. 
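The postStart HTTP hook exercised above fires right after the container starts, and the container is held back from Ready until the handler responds. A hedged sketch of such a pod; the handler host, port, and path here are placeholders standing in for the handler pod the test runs:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: poststart-http-demo      # hypothetical name
  spec:
    containers:
    - name: main
      image: docker.io/library/nginx:1.14-alpine
      lifecycle:
        postStart:
          httpGet:
            host: 10.0.0.10        # placeholder for the handler pod IP
            port: 8080
            path: /echo            # placeholder path
  EOF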
Mar 18 13:40:34.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:40:34.391: INFO: namespace container-lifecycle-hook-4824 deletion completed in 22.093188228s • [SLOW TEST:38.386 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:40:34.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 18 13:40:39.020: INFO: Successfully updated pod "labelsupdate6955c270-b2ca-4bb5-8d2b-df56d8870620" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:40:41.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4689" for this suite. 
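The labels-update spec works because a downwardAPI volume is kept in sync by the kubelet: when the pod's labels change, the projected file is rewritten on the next sync. A minimal sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labels-demo              # hypothetical name
    labels:
      key: value1
  spec:
    containers:
    - name: client
      image: docker.io/library/nginx:1.14-alpine
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
  EOF
  kubectl label pod labels-demo key=value2 --overwrite   # the projected file catches up after the kubelet sync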
Mar 18 13:41:03.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:41:03.179: INFO: namespace downward-api-4689 deletion completed in 22.117443937s • [SLOW TEST:28.787 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:41:03.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 18 13:41:07.782: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3c3b48fb-1185-419e-9254-2d5a78dbd64e" Mar 18 13:41:07.782: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3c3b48fb-1185-419e-9254-2d5a78dbd64e" in namespace "pods-9478" to be "terminated due to deadline exceeded" Mar 18 13:41:07.798: INFO: Pod "pod-update-activedeadlineseconds-3c3b48fb-1185-419e-9254-2d5a78dbd64e": Phase="Running", Reason="", readiness=true. Elapsed: 16.419185ms Mar 18 13:41:09.803: INFO: Pod "pod-update-activedeadlineseconds-3c3b48fb-1185-419e-9254-2d5a78dbd64e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.020937339s Mar 18 13:41:09.803: INFO: Pod "pod-update-activedeadlineseconds-3c3b48fb-1185-419e-9254-2d5a78dbd64e" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:41:09.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9478" for this suite. 
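The phase flip above, from Running to Failed with reason DeadlineExceeded, is the kubelet enforcing the updated deadline. Per pod update validation, activeDeadlineSeconds may be added or decreased on a live pod but not removed or increased. A sketch of the update step, assuming a running pod named pod-demo:

  kubectl patch pod pod-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
  kubectl get pod pod-demo -o template --template='{{.status.phase}}/{{.status.reason}}'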
Mar 18 13:41:15.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:41:15.906: INFO: namespace pods-9478 deletion completed in 6.097859328s • [SLOW TEST:12.727 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:41:15.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-719 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 18 13:41:15.949: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 18 13:41:40.064: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.58:8080/dial?request=hostName&protocol=udp&host=10.244.1.38&port=8081&tries=1'] Namespace:pod-network-test-719 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 13:41:40.064: INFO: >>> kubeConfig: /root/.kube/config I0318 13:41:40.102512 6 log.go:172] (0xc000fae790) (0xc0012e66e0) Create stream I0318 13:41:40.102543 6 log.go:172] (0xc000fae790) (0xc0012e66e0) Stream added, broadcasting: 1 I0318 13:41:40.105230 6 log.go:172] (0xc000fae790) Reply frame received for 1 I0318 13:41:40.105264 6 log.go:172] (0xc000fae790) (0xc0012e68c0) Create stream I0318 13:41:40.105271 6 log.go:172] (0xc000fae790) (0xc0012e68c0) Stream added, broadcasting: 3 I0318 13:41:40.106254 6 log.go:172] (0xc000fae790) Reply frame received for 3 I0318 13:41:40.106295 6 log.go:172] (0xc000fae790) (0xc0019d1220) Create stream I0318 13:41:40.106311 6 log.go:172] (0xc000fae790) (0xc0019d1220) Stream added, broadcasting: 5 I0318 13:41:40.107224 6 log.go:172] (0xc000fae790) Reply frame received for 5 I0318 13:41:40.188218 6 log.go:172] (0xc000fae790) Data frame received for 3 I0318 13:41:40.188251 6 log.go:172] (0xc0012e68c0) (3) Data frame handling I0318 13:41:40.188271 6 log.go:172] (0xc0012e68c0) (3) Data frame sent I0318 13:41:40.189031 6 log.go:172] (0xc000fae790) Data frame received for 3 I0318 13:41:40.189268 6 log.go:172] (0xc0012e68c0) (3) Data frame handling I0318 13:41:40.189308 6 log.go:172] (0xc000fae790) Data frame received for 5 I0318 13:41:40.189338 6 log.go:172] (0xc0019d1220) (5) Data frame handling I0318 13:41:40.191574 6 log.go:172] (0xc000fae790) Data frame received for 1 I0318 13:41:40.191589 6 log.go:172] (0xc0012e66e0) (1) Data frame handling I0318 13:41:40.191601 6 log.go:172] (0xc0012e66e0) (1) Data frame sent I0318 13:41:40.191617 6 
log.go:172] (0xc000fae790) (0xc0012e66e0) Stream removed, broadcasting: 1 I0318 13:41:40.191696 6 log.go:172] (0xc000fae790) Go away received I0318 13:41:40.191757 6 log.go:172] (0xc000fae790) (0xc0012e66e0) Stream removed, broadcasting: 1 I0318 13:41:40.191808 6 log.go:172] (0xc000fae790) (0xc0012e68c0) Stream removed, broadcasting: 3 I0318 13:41:40.191823 6 log.go:172] (0xc000fae790) (0xc0019d1220) Stream removed, broadcasting: 5 Mar 18 13:41:40.191: INFO: Waiting for endpoints: map[] Mar 18 13:41:40.195: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.58:8080/dial?request=hostName&protocol=udp&host=10.244.2.57&port=8081&tries=1'] Namespace:pod-network-test-719 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 13:41:40.195: INFO: >>> kubeConfig: /root/.kube/config I0318 13:41:40.232103 6 log.go:172] (0xc000da9550) (0xc0019d1860) Create stream I0318 13:41:40.232153 6 log.go:172] (0xc000da9550) (0xc0019d1860) Stream added, broadcasting: 1 I0318 13:41:40.234905 6 log.go:172] (0xc000da9550) Reply frame received for 1 I0318 13:41:40.234948 6 log.go:172] (0xc000da9550) (0xc0019d1900) Create stream I0318 13:41:40.234963 6 log.go:172] (0xc000da9550) (0xc0019d1900) Stream added, broadcasting: 3 I0318 13:41:40.236101 6 log.go:172] (0xc000da9550) Reply frame received for 3 I0318 13:41:40.236143 6 log.go:172] (0xc000da9550) (0xc0012e6b40) Create stream I0318 13:41:40.236158 6 log.go:172] (0xc000da9550) (0xc0012e6b40) Stream added, broadcasting: 5 I0318 13:41:40.236986 6 log.go:172] (0xc000da9550) Reply frame received for 5 I0318 13:41:40.291950 6 log.go:172] (0xc000da9550) Data frame received for 3 I0318 13:41:40.291985 6 log.go:172] (0xc0019d1900) (3) Data frame handling I0318 13:41:40.292012 6 log.go:172] (0xc0019d1900) (3) Data frame sent I0318 13:41:40.292754 6 log.go:172] (0xc000da9550) Data frame received for 3 I0318 13:41:40.292776 6 log.go:172] (0xc0019d1900) (3) Data frame handling I0318 13:41:40.292798 6 log.go:172] (0xc000da9550) Data frame received for 5 I0318 13:41:40.292811 6 log.go:172] (0xc0012e6b40) (5) Data frame handling I0318 13:41:40.294434 6 log.go:172] (0xc000da9550) Data frame received for 1 I0318 13:41:40.294470 6 log.go:172] (0xc0019d1860) (1) Data frame handling I0318 13:41:40.294495 6 log.go:172] (0xc0019d1860) (1) Data frame sent I0318 13:41:40.294514 6 log.go:172] (0xc000da9550) (0xc0019d1860) Stream removed, broadcasting: 1 I0318 13:41:40.294532 6 log.go:172] (0xc000da9550) Go away received I0318 13:41:40.294682 6 log.go:172] (0xc000da9550) (0xc0019d1860) Stream removed, broadcasting: 1 I0318 13:41:40.294708 6 log.go:172] (0xc000da9550) (0xc0019d1900) Stream removed, broadcasting: 3 I0318 13:41:40.294729 6 log.go:172] (0xc000da9550) (0xc0012e6b40) Stream removed, broadcasting: 5 Mar 18 13:41:40.294: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:41:40.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-719" for this suite. 
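The I0318 stream frames above are just the SPDY transport for kubectl exec; the actual probe is the curl shown in ExecWithOptions. A host-network helper pod asks the webserver on 10.244.2.58 to dial each endpoint over UDP and report which hostname answered. While the test pods exist, the same probe can be replayed by hand (these IPs and names are the ones from this run):

  kubectl exec host-test-container-pod -n pod-network-test-719 -c hostexec -- \
    curl -g -q -s 'http://10.244.2.58:8080/dial?request=hostName&protocol=udp&host=10.244.1.38&port=8081&tries=1'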
Mar 18 13:42:02.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:42:02.407: INFO: namespace pod-network-test-719 deletion completed in 22.108869936s • [SLOW TEST:46.501 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:42:02.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Mar 18 13:42:07.015: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9006 pod-service-account-dfb9baea-2b05-4b7d-8d4d-4e20aa35d137 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 18 13:42:07.261: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9006 pod-service-account-dfb9baea-2b05-4b7d-8d4d-4e20aa35d137 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 18 13:42:07.450: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9006 pod-service-account-dfb9baea-2b05-4b7d-8d4d-4e20aa35d137 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:42:07.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9006" for this suite. 
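The three files read above are the standard contents of the auto-mounted service-account volume. Against any running pod that automounts its service account, the same reads look like this (POD and CONTAINER are placeholders):

  kubectl exec POD -c CONTAINER -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
  kubectl exec POD -c CONTAINER -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  kubectl exec POD -c CONTAINER -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace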
Mar 18 13:42:13.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:42:13.758: INFO: namespace svcaccounts-9006 deletion completed in 6.089646943s • [SLOW TEST:11.351 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:42:13.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-78276717-31f8-4fc6-8c6e-f37fc877a2d4 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:42:13.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7577" for this suite. Mar 18 13:42:19.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:42:19.933: INFO: namespace secrets-7577 deletion completed in 6.102964073s • [SLOW TEST:6.174 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:42:19.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Mar 18 13:42:20.012: INFO: Waiting up to 5m0s for pod "client-containers-c61608a7-1d45-440c-a7c0-a1fdc27f7f6e" in namespace "containers-8030" to be "success or failure" Mar 18 13:42:20.029: INFO: Pod "client-containers-c61608a7-1d45-440c-a7c0-a1fdc27f7f6e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.499102ms Mar 18 13:42:22.033: INFO: Pod "client-containers-c61608a7-1d45-440c-a7c0-a1fdc27f7f6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021419228s Mar 18 13:42:24.038: INFO: Pod "client-containers-c61608a7-1d45-440c-a7c0-a1fdc27f7f6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025747715s STEP: Saw pod success Mar 18 13:42:24.038: INFO: Pod "client-containers-c61608a7-1d45-440c-a7c0-a1fdc27f7f6e" satisfied condition "success or failure" Mar 18 13:42:24.041: INFO: Trying to get logs from node iruya-worker2 pod client-containers-c61608a7-1d45-440c-a7c0-a1fdc27f7f6e container test-container: STEP: delete the pod Mar 18 13:42:24.063: INFO: Waiting for pod client-containers-c61608a7-1d45-440c-a7c0-a1fdc27f7f6e to disappear Mar 18 13:42:24.068: INFO: Pod client-containers-c61608a7-1d45-440c-a7c0-a1fdc27f7f6e no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:42:24.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8030" for this suite. Mar 18 13:42:30.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:42:30.168: INFO: namespace containers-8030 deletion completed in 6.097182009s • [SLOW TEST:10.234 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:42:30.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 18 13:42:30.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8142' Mar 18 13:42:30.310: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 13:42:30.310: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller Mar 18 13:42:30.330: INFO: scanned /root for discovery docs: Mar 18 13:42:30.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8142' Mar 18 13:42:46.200: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 18 13:42:46.200: INFO: stdout: "Created e2e-test-nginx-rc-ac745f596e53904f3186e9cf7607aba3\nScaling up e2e-test-nginx-rc-ac745f596e53904f3186e9cf7607aba3 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ac745f596e53904f3186e9cf7607aba3 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ac745f596e53904f3186e9cf7607aba3 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Mar 18 13:42:46.201: INFO: stdout: "Created e2e-test-nginx-rc-ac745f596e53904f3186e9cf7607aba3\nScaling up e2e-test-nginx-rc-ac745f596e53904f3186e9cf7607aba3 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ac745f596e53904f3186e9cf7607aba3 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ac745f596e53904f3186e9cf7607aba3 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Mar 18 13:42:46.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8142' Mar 18 13:42:46.292: INFO: stderr: "" Mar 18 13:42:46.292: INFO: stdout: "e2e-test-nginx-rc-ac745f596e53904f3186e9cf7607aba3-d82mv " Mar 18 13:42:46.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ac745f596e53904f3186e9cf7607aba3-d82mv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8142' Mar 18 13:42:46.379: INFO: stderr: "" Mar 18 13:42:46.379: INFO: stdout: "true" Mar 18 13:42:46.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ac745f596e53904f3186e9cf7607aba3-d82mv -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8142' Mar 18 13:42:46.467: INFO: stderr: "" Mar 18 13:42:46.467: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Mar 18 13:42:46.467: INFO: e2e-test-nginx-rc-ac745f596e53904f3186e9cf7607aba3-d82mv is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Mar 18 13:42:46.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8142' Mar 18 13:42:46.587: INFO: stderr: "" Mar 18 13:42:46.587: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:42:46.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8142" for this suite. Mar 18 13:42:52.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:42:52.732: INFO: namespace kubectl-8142 deletion completed in 6.135083414s • [SLOW TEST:22.563 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:42:52.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:42:59.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2851" for this suite. Mar 18 13:43:05.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:43:05.196: INFO: namespace namespaces-2851 deletion completed in 6.103798747s STEP: Destroying namespace "nsdeletetest-2320" for this suite. 
Mar 18 13:43:05.199: INFO: Namespace nsdeletetest-2320 was already deleted STEP: Destroying namespace "nsdeletetest-6318" for this suite. Mar 18 13:43:11.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:43:11.291: INFO: namespace nsdeletetest-6318 deletion completed in 6.092132259s • [SLOW TEST:18.558 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:43:11.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 18 13:43:11.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1397' Mar 18 13:43:11.424: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 18 13:43:11.424: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Mar 18 13:43:11.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1397' Mar 18 13:43:11.763: INFO: stderr: "" Mar 18 13:43:11.763: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:43:11.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1397" for this suite. 
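The deprecation warnings in this spec and the rolling-update spec earlier both come from kubectl run's generator machinery: with no --generator flag it created a Deployment (deployment/apps.v1), while --generator=run/v1 created a ReplicationController. The non-deprecated equivalents the warnings point to are:

  kubectl run nginx --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine   # a bare pod
  kubectl create deployment nginx --image=docker.io/library/nginx:1.14-alpine            # a Deployment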
Mar 18 13:43:17.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:43:17.874: INFO: namespace kubectl-1397 deletion completed in 6.097379083s • [SLOW TEST:6.582 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:43:17.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:43:17.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9015" for this suite. 
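The "secure master service" assertion reduces to the built-in kubernetes Service in the default namespace exposing an https port on 443/TCP. A rough manual approximation of what the spec verifies:

  kubectl get service kubernetes -n default -o template --template='{{range .spec.ports}}{{.name}}:{{.port}} {{end}}'
  # expected output: https:443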
Mar 18 13:43:23.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:43:24.062: INFO: namespace services-9015 deletion completed in 6.096030868s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.188 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:43:24.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 18 13:43:24.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4383' Mar 18 13:43:24.201: INFO: stderr: "" Mar 18 13:43:24.201: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Mar 18 13:43:24.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4383' Mar 18 13:43:31.862: INFO: stderr: "" Mar 18 13:43:31.862: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:43:31.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4383" for this suite. 
Mar 18 13:43:37.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:43:37.954: INFO: namespace kubectl-4383 deletion completed in 6.089082404s • [SLOW TEST:13.892 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:43:37.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-72ca4145-2165-4904-ab3a-a8aefafbe7f5 STEP: Creating a pod to test consume secrets Mar 18 13:43:38.037: INFO: Waiting up to 5m0s for pod "pod-secrets-b92a6bd0-ffec-493f-b580-8196f2e74fee" in namespace "secrets-8340" to be "success or failure" Mar 18 13:43:38.056: INFO: Pod "pod-secrets-b92a6bd0-ffec-493f-b580-8196f2e74fee": Phase="Pending", Reason="", readiness=false. Elapsed: 18.709777ms Mar 18 13:43:40.060: INFO: Pod "pod-secrets-b92a6bd0-ffec-493f-b580-8196f2e74fee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022741217s Mar 18 13:43:42.064: INFO: Pod "pod-secrets-b92a6bd0-ffec-493f-b580-8196f2e74fee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026598123s STEP: Saw pod success Mar 18 13:43:42.064: INFO: Pod "pod-secrets-b92a6bd0-ffec-493f-b580-8196f2e74fee" satisfied condition "success or failure" Mar 18 13:43:42.067: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-b92a6bd0-ffec-493f-b580-8196f2e74fee container secret-volume-test: STEP: delete the pod Mar 18 13:43:42.085: INFO: Waiting for pod pod-secrets-b92a6bd0-ffec-493f-b580-8196f2e74fee to disappear Mar 18 13:43:42.089: INFO: Pod pod-secrets-b92a6bd0-ffec-493f-b580-8196f2e74fee no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:43:42.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8340" for this suite. 
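"With mappings" means the secret is not mounted wholesale: an items list remaps individual keys to chosen paths inside the volume. A minimal sketch, with hypothetical names and key:

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mapping-demo      # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: docker.io/library/nginx:1.14-alpine
      command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
        items:
        - key: data-1
          path: new-path-data-1    # the key is exposed under this remapped filename
  EOF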
Mar 18 13:43:48.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:43:48.194: INFO: namespace secrets-8340 deletion completed in 6.10061865s • [SLOW TEST:10.239 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:43:48.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-eff9012a-1428-4569-ba66-b9ba1565ea22 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-eff9012a-1428-4569-ba66-b9ba1565ea22 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:45:16.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-861" for this suite. 
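The long runtime on this spec (110 seconds against ~10 for most volume specs) is largely waiting on the kubelet: configMap volume contents are refreshed on a periodic sync rather than instantly, so the test polls until the updated value shows up in the mounted file. The update itself is just a patch to the configMap (illustrative names):

  kubectl create configmap demo-config --from-literal=data-1=value-1
  kubectl patch configmap demo-config -p '{"data":{"data-1":"value-2"}}'
  # a pod mounting demo-config as a volume sees the new value after the next kubelet sync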
Mar 18 13:45:38.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:45:38.864: INFO: namespace configmap-861 deletion completed in 22.096408928s • [SLOW TEST:110.670 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:45:38.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 13:45:38.923: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:45:40.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4002" for this suite. 
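In this release CRDs are still served from apiextensions.k8s.io/v1beta1. The create/delete round-trip above can be approximated with a minimal definition like the following (group, kind, and names are hypothetical, borrowed from the usual docs example):

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1beta1
  kind: CustomResourceDefinition
  metadata:
    name: crontabs.stable.example.com   # must be <plural>.<group>
  spec:
    group: stable.example.com
    version: v1
    scope: Namespaced
    names:
      plural: crontabs
      kind: CronTab
  EOF
  kubectl delete crd crontabs.stable.example.com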
Mar 18 13:45:46.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:45:46.130: INFO: namespace custom-resource-definition-4002 deletion completed in 6.122885272s • [SLOW TEST:7.266 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:45:46.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 18 13:45:46.571: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 13:45:46.574: INFO: Number of nodes with available pods: 0 Mar 18 13:45:46.574: INFO: Node iruya-worker is running more than one daemon pod Mar 18 13:45:47.579: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 13:45:47.582: INFO: Number of nodes with available pods: 0 Mar 18 13:45:47.582: INFO: Node iruya-worker is running more than one daemon pod Mar 18 13:45:48.580: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 13:45:48.584: INFO: Number of nodes with available pods: 0 Mar 18 13:45:48.584: INFO: Node iruya-worker is running more than one daemon pod Mar 18 13:45:49.589: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 13:45:49.594: INFO: Number of nodes with available pods: 0 Mar 18 13:45:49.594: INFO: Node iruya-worker is running more than one daemon pod Mar 18 13:45:50.579: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 13:45:50.583: INFO: Number of nodes with available pods: 2 Mar 18 13:45:50.583: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is 
revived. Mar 18 13:45:50.643: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 13:45:50.648: INFO: Number of nodes with available pods: 1 Mar 18 13:45:50.648: INFO: Node iruya-worker is running more than one daemon pod Mar 18 13:45:51.654: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 13:45:51.657: INFO: Number of nodes with available pods: 1 Mar 18 13:45:51.657: INFO: Node iruya-worker is running more than one daemon pod Mar 18 13:45:52.654: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 13:45:52.657: INFO: Number of nodes with available pods: 1 Mar 18 13:45:52.657: INFO: Node iruya-worker is running more than one daemon pod Mar 18 13:45:53.653: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 13:45:53.656: INFO: Number of nodes with available pods: 2 Mar 18 13:45:53.656: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9039, will wait for the garbage collector to delete the pods Mar 18 13:45:53.722: INFO: Deleting DaemonSet.extensions daemon-set took: 6.351813ms Mar 18 13:45:54.022: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.263938ms Mar 18 13:46:02.226: INFO: Number of nodes with available pods: 0 Mar 18 13:46:02.226: INFO: Number of running nodes: 0, number of available pods: 0 Mar 18 13:46:02.229: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9039/daemonsets","resourceVersion":"523288"},"items":null} Mar 18 13:46:02.231: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9039/pods","resourceVersion":"523288"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:46:02.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9039" for this suite. 
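Two details of the run above are worth noting: the DaemonSet skips iruya-control-plane because of its node-role.kubernetes.io/master:NoSchedule taint and the pod template carries no matching toleration, and after a pod's phase is forced to Failed the controller recreates it, which is the "revived" check. A minimal DaemonSet of the same shape (name, labels, and image illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set               # hypothetical name
  spec:
    selector:
      matchLabels:
        daemonset-name: daemon-set
    template:
      metadata:
        labels:
          daemonset-name: daemon-set
      spec:
        containers:
        - name: app
          image: docker.io/library/nginx:1.14-alpine
  EOF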
Mar 18 13:46:08.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:46:08.331: INFO: namespace daemonsets-9039 deletion completed in 6.089157072s • [SLOW TEST:22.200 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:46:08.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-zszn STEP: Creating a pod to test atomic-volume-subpath Mar 18 13:46:08.422: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zszn" in namespace "subpath-8025" to be "success or failure" Mar 18 13:46:08.426: INFO: Pod "pod-subpath-test-downwardapi-zszn": Phase="Pending", Reason="", readiness=false. Elapsed: 3.756544ms Mar 18 13:46:10.430: INFO: Pod "pod-subpath-test-downwardapi-zszn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007812333s Mar 18 13:46:12.434: INFO: Pod "pod-subpath-test-downwardapi-zszn": Phase="Running", Reason="", readiness=true. Elapsed: 4.011610756s Mar 18 13:46:14.438: INFO: Pod "pod-subpath-test-downwardapi-zszn": Phase="Running", Reason="", readiness=true. Elapsed: 6.015568792s Mar 18 13:46:16.442: INFO: Pod "pod-subpath-test-downwardapi-zszn": Phase="Running", Reason="", readiness=true. Elapsed: 8.019783463s Mar 18 13:46:18.446: INFO: Pod "pod-subpath-test-downwardapi-zszn": Phase="Running", Reason="", readiness=true. Elapsed: 10.024025243s Mar 18 13:46:20.451: INFO: Pod "pod-subpath-test-downwardapi-zszn": Phase="Running", Reason="", readiness=true. Elapsed: 12.028623865s Mar 18 13:46:22.455: INFO: Pod "pod-subpath-test-downwardapi-zszn": Phase="Running", Reason="", readiness=true. Elapsed: 14.032674331s Mar 18 13:46:24.457: INFO: Pod "pod-subpath-test-downwardapi-zszn": Phase="Running", Reason="", readiness=true. Elapsed: 16.035304263s Mar 18 13:46:26.463: INFO: Pod "pod-subpath-test-downwardapi-zszn": Phase="Running", Reason="", readiness=true. Elapsed: 18.041236386s Mar 18 13:46:28.467: INFO: Pod "pod-subpath-test-downwardapi-zszn": Phase="Running", Reason="", readiness=true. Elapsed: 20.045222239s Mar 18 13:46:30.472: INFO: Pod "pod-subpath-test-downwardapi-zszn": Phase="Running", Reason="", readiness=true. Elapsed: 22.04947744s Mar 18 13:46:32.476: INFO: Pod "pod-subpath-test-downwardapi-zszn": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.05370736s Mar 18 13:46:34.480: INFO: Pod "pod-subpath-test-downwardapi-zszn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.057704958s STEP: Saw pod success Mar 18 13:46:34.480: INFO: Pod "pod-subpath-test-downwardapi-zszn" satisfied condition "success or failure" Mar 18 13:46:34.483: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-zszn container test-container-subpath-downwardapi-zszn: STEP: delete the pod Mar 18 13:46:34.565: INFO: Waiting for pod pod-subpath-test-downwardapi-zszn to disappear Mar 18 13:46:34.595: INFO: Pod pod-subpath-test-downwardapi-zszn no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-zszn Mar 18 13:46:34.595: INFO: Deleting pod "pod-subpath-test-downwardapi-zszn" in namespace "subpath-8025" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:46:34.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8025" for this suite. Mar 18 13:46:40.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:46:40.725: INFO: namespace subpath-8025 deletion completed in 6.125917272s • [SLOW TEST:32.394 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:46:40.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-48a78512-bfdb-4fc5-a5d7-39f09ca80730 STEP: Creating a pod to test consume secrets Mar 18 13:46:40.814: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a92b6bbb-3d47-4288-be84-fd2ba6a9966d" in namespace "projected-1804" to be "success or failure" Mar 18 13:46:40.822: INFO: Pod "pod-projected-secrets-a92b6bbb-3d47-4288-be84-fd2ba6a9966d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.718497ms Mar 18 13:46:42.826: INFO: Pod "pod-projected-secrets-a92b6bbb-3d47-4288-be84-fd2ba6a9966d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01156433s Mar 18 13:46:44.830: INFO: Pod "pod-projected-secrets-a92b6bbb-3d47-4288-be84-fd2ba6a9966d": Phase="Succeeded", Reason="", readiness=false. 
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 13:46:40.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-48a78512-bfdb-4fc5-a5d7-39f09ca80730
STEP: Creating a pod to test consume secrets
Mar 18 13:46:40.814: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a92b6bbb-3d47-4288-be84-fd2ba6a9966d" in namespace "projected-1804" to be "success or failure"
Mar 18 13:46:40.822: INFO: Pod "pod-projected-secrets-a92b6bbb-3d47-4288-be84-fd2ba6a9966d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.718497ms
Mar 18 13:46:42.826: INFO: Pod "pod-projected-secrets-a92b6bbb-3d47-4288-be84-fd2ba6a9966d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01156433s
Mar 18 13:46:44.830: INFO: Pod "pod-projected-secrets-a92b6bbb-3d47-4288-be84-fd2ba6a9966d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0160256s
STEP: Saw pod success
Mar 18 13:46:44.831: INFO: Pod "pod-projected-secrets-a92b6bbb-3d47-4288-be84-fd2ba6a9966d" satisfied condition "success or failure"
Mar 18 13:46:44.834: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-a92b6bbb-3d47-4288-be84-fd2ba6a9966d container projected-secret-volume-test:
STEP: delete the pod
Mar 18 13:46:44.866: INFO: Waiting for pod pod-projected-secrets-a92b6bbb-3d47-4288-be84-fd2ba6a9966d to disappear
Mar 18 13:46:44.876: INFO: Pod pod-projected-secrets-a92b6bbb-3d47-4288-be84-fd2ba6a9966d no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 13:46:44.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1804" for this suite.
Mar 18 13:46:50.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 13:46:50.993: INFO: namespace projected-1804 deletion completed in 6.114101512s
• [SLOW TEST:10.268 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
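What the defaultMode test projects, as a minimal sketch: a secret exposed through a projected volume with an explicit file mode. The secret name and mode here are hypothetical; the suite generates a fresh secret per run and asserts on the file permissions it observes.

```go
package main

import corev1 "k8s.io/api/core/v1"

// defaultMode applies to every file rendered from the projection.
var mode = int32(0400)

var secretVolume = corev1.Volume{
	Name: "projected-secret-volume",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			DefaultMode: &mode,
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
				},
			}},
		},
	},
}
```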
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 13:46:50.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 18 13:46:59.129: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 18 13:46:59.134: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 18 13:47:01.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 18 13:47:01.137: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 18 13:47:03.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 18 13:47:03.138: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 18 13:47:05.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 18 13:47:05.138: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 18 13:47:07.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 18 13:47:07.138: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 18 13:47:09.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 18 13:47:09.151: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 18 13:47:11.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 18 13:47:11.137: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 18 13:47:13.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 18 13:47:13.140: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 18 13:47:15.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 18 13:47:15.138: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 18 13:47:17.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 18 13:47:17.139: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 18 13:47:19.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 18 13:47:19.139: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 18 13:47:21.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 18 13:47:21.138: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 18 13:47:23.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 18 13:47:23.138: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 13:47:23.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5479" for this suite.
Mar 18 13:47:45.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 13:47:45.234: INFO: namespace container-lifecycle-hook-5479 deletion completed in 22.091615524s
• [SLOW TEST:54.240 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
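The shape of the hooked container, sketched below: the kubelet runs the postStart handler right after the container starts, and hook delivery must succeed before the container counts as started. Handler is this release's type name (later releases renamed it LifecycleHandler); the target URL is a hypothetical stand-in for the HTTPGet handler pod created above.

```go
package main

import corev1 "k8s.io/api/core/v1"

// Sketch of a container whose postStart exec hook reports back to a
// separate handler pod, roughly what this test wires up.
var hooked = corev1.Container{
	Name:  "pod-with-poststart-exec-hook",
	Image: "busybox",
	Lifecycle: &corev1.Lifecycle{
		PostStart: &corev1.Handler{
			Exec: &corev1.ExecAction{
				Command: []string{"sh", "-c", "wget -qO- http://hook-handler:8080/echo?msg=poststart"},
			},
		},
	},
}
```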
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 13:47:45.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-7e916c32-7dd9-423f-94b4-fd26ff641d99
STEP: Creating a pod to test consume configMaps
Mar 18 13:47:45.330: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8bc7c709-b04b-465f-b6c6-8a160ca04137" in namespace "projected-5684" to be "success or failure"
Mar 18 13:47:45.333: INFO: Pod "pod-projected-configmaps-8bc7c709-b04b-465f-b6c6-8a160ca04137": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259224ms
Mar 18 13:47:47.385: INFO: Pod "pod-projected-configmaps-8bc7c709-b04b-465f-b6c6-8a160ca04137": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054857986s
Mar 18 13:47:49.396: INFO: Pod "pod-projected-configmaps-8bc7c709-b04b-465f-b6c6-8a160ca04137": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065526411s
STEP: Saw pod success
Mar 18 13:47:49.396: INFO: Pod "pod-projected-configmaps-8bc7c709-b04b-465f-b6c6-8a160ca04137" satisfied condition "success or failure"
Mar 18 13:47:49.398: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-8bc7c709-b04b-465f-b6c6-8a160ca04137 container projected-configmap-volume-test:
STEP: delete the pod
Mar 18 13:47:49.447: INFO: Waiting for pod pod-projected-configmaps-8bc7c709-b04b-465f-b6c6-8a160ca04137 to disappear
Mar 18 13:47:49.485: INFO: Pod pod-projected-configmaps-8bc7c709-b04b-465f-b6c6-8a160ca04137 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 13:47:49.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5684" for this suite.
Mar 18 13:47:55.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 13:47:55.578: INFO: namespace projected-5684 deletion completed in 6.088970332s
• [SLOW TEST:10.344 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
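The mappings-as-non-root arrangement, sketched: a configMap key is remapped to a different path inside the volume, and the pod runs with a non-zero UID so the projected file must still be readable. Key, path, and UID below are hypothetical.

```go
package main

import corev1 "k8s.io/api/core/v1"

var nonRoot = int64(1000)

// Sketch of the pod spec shape for the mappings-as-non-root test.
var spec = corev1.PodSpec{
	SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
	Containers: []corev1.Container{{
		Name:         "projected-configmap-volume-test",
		Image:        "busybox",
		VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/projected-configmap-volume"}},
	}},
	Volumes: []corev1.Volume{{
		Name: "cm",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
						// Remap key "data-1" to a nested path in the volume.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				}},
			},
		},
	}},
}
```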
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 13:47:55.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 18 13:47:55.674: INFO: Waiting up to 5m0s for pod "pod-8650f848-285f-4f82-b64f-08a1efc1e500" in namespace "emptydir-6832" to be "success or failure"
Mar 18 13:47:55.686: INFO: Pod "pod-8650f848-285f-4f82-b64f-08a1efc1e500": Phase="Pending", Reason="", readiness=false. Elapsed: 11.693553ms
Mar 18 13:47:57.690: INFO: Pod "pod-8650f848-285f-4f82-b64f-08a1efc1e500": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01558004s
Mar 18 13:47:59.694: INFO: Pod "pod-8650f848-285f-4f82-b64f-08a1efc1e500": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019909221s
STEP: Saw pod success
Mar 18 13:47:59.694: INFO: Pod "pod-8650f848-285f-4f82-b64f-08a1efc1e500" satisfied condition "success or failure"
Mar 18 13:47:59.698: INFO: Trying to get logs from node iruya-worker2 pod pod-8650f848-285f-4f82-b64f-08a1efc1e500 container test-container:
STEP: delete the pod
Mar 18 13:47:59.734: INFO: Waiting for pod pod-8650f848-285f-4f82-b64f-08a1efc1e500 to disappear
Mar 18 13:47:59.746: INFO: Pod pod-8650f848-285f-4f82-b64f-08a1efc1e500 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 13:47:59.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6832" for this suite.
Mar 18 13:48:05.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 13:48:05.844: INFO: namespace emptydir-6832 deletion completed in 6.093456466s
• [SLOW TEST:10.266 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
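The only difference between this variant and the tmpfs one run earlier is the volume medium. A minimal sketch of the volume under test:

```go
package main

import corev1 "k8s.io/api/core/v1"

// emptyDir on the node's default medium (disk-backed); the tmpfs
// variants set Medium to corev1.StorageMediumMemory instead.
var emptyDirVolume = corev1.Volume{
	Name: "test-volume",
	VolumeSource: corev1.VolumeSource{
		EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
	},
}
```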
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 13:48:05.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Mar 18 13:48:05.910: INFO: Waiting up to 5m0s for pod "downward-api-e97c3615-4070-4ff3-b4d6-6d2aad8b34eb" in namespace "downward-api-249" to be "success or failure"
Mar 18 13:48:05.955: INFO: Pod "downward-api-e97c3615-4070-4ff3-b4d6-6d2aad8b34eb": Phase="Pending", Reason="", readiness=false. Elapsed: 45.319532ms
Mar 18 13:48:07.959: INFO: Pod "downward-api-e97c3615-4070-4ff3-b4d6-6d2aad8b34eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049891298s
Mar 18 13:48:09.964: INFO: Pod "downward-api-e97c3615-4070-4ff3-b4d6-6d2aad8b34eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054327256s
STEP: Saw pod success
Mar 18 13:48:09.964: INFO: Pod "downward-api-e97c3615-4070-4ff3-b4d6-6d2aad8b34eb" satisfied condition "success or failure"
Mar 18 13:48:09.967: INFO: Trying to get logs from node iruya-worker pod downward-api-e97c3615-4070-4ff3-b4d6-6d2aad8b34eb container dapi-container:
STEP: delete the pod
Mar 18 13:48:09.987: INFO: Waiting for pod downward-api-e97c3615-4070-4ff3-b4d6-6d2aad8b34eb to disappear
Mar 18 13:48:09.991: INFO: Pod downward-api-e97c3615-4070-4ff3-b4d6-6d2aad8b34eb no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 13:48:09.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-249" for this suite.
Mar 18 13:48:16.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 13:48:16.084: INFO: namespace downward-api-249 deletion completed in 6.089742991s
• [SLOW TEST:10.239 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
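A sketch of the env var the dapi-container prints and the test greps for; the variable name is a hypothetical choice:

```go
package main

import corev1 "k8s.io/api/core/v1"

// Downward API: expose the pod's own UID to the container environment.
var podUID = corev1.EnvVar{
	Name: "POD_UID",
	ValueFrom: &corev1.EnvVarSource{
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
	},
}
```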
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 13:48:16.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Mar 18 13:48:16.138: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Mar 18 13:48:16.585: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Mar 18 13:48:18.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136096, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136096, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136096, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136096, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 18 13:48:21.330: INFO: Waited 621.914175ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 13:48:21.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-9043" for this suite.
Mar 18 13:48:27.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 13:48:28.032: INFO: namespace aggregator-9043 deletion completed in 6.266514031s
• [SLOW TEST:11.948 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
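Registering a sample apiserver means creating an APIService object that tells the aggregation layer to proxy a group/version to an in-cluster service. A sketch of that object; the group, version, service name, and priorities below are assumptions (the suite registers its own wardle sample server and pins a CABundle rather than skipping TLS verification):

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

// Sketch: route wardle.k8s.io/v1alpha1 through the aggregator to a
// Service fronting the sample apiserver deployment.
var sampleAPIService = &apiregv1.APIService{
	ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.k8s.io"},
	Spec: apiregv1.APIServiceSpec{
		Group:                 "wardle.k8s.io",
		Version:               "v1alpha1",
		Service:               &apiregv1.ServiceReference{Namespace: "aggregator-9043", Name: "sample-api"},
		InsecureSkipTLSVerify: true, // the real test supplies a CABundle instead
		GroupPriorityMinimum:  2000,
		VersionPriority:       200,
	},
}
```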
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 13:48:28.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Mar 18 13:48:32.661: INFO: Successfully updated pod "annotationupdate243c572a-b6e7-4ce7-a009-62bb207432c6"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 13:48:34.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3545" for this suite.
Mar 18 13:48:56.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 13:48:56.772: INFO: namespace downward-api-3545 deletion completed in 22.094114634s
• [SLOW TEST:28.740 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
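Unlike env vars, downwardAPI volume files track changes: after the annotation update logged above, the kubelet re-renders the file (swapping a symlinked directory so readers never see a partial write). A sketch of the volume file behind this test:

```go
package main

import corev1 "k8s.io/api/core/v1"

// Project the pod's annotations into a file; the kubelet refreshes it
// when the annotations are modified, which is what the test waits on.
var annotationsFile = corev1.DownwardAPIVolumeFile{
	Path:     "annotations",
	FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
}
```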
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 13:48:56.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0318 13:49:27.396610       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 18 13:49:27.396: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 13:49:27.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8821" for this suite.
Mar 18 13:49:33.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 13:49:33.482: INFO: namespace gc-8821 deletion completed in 6.082097619s
• [SLOW TEST:36.710 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
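The delete call under test, sketched with client-go signatures of the same vintage as this run (pre-1.18, no context argument); the deployment name is hypothetical. Orphan propagation removes the owner reference from the ReplicaSet instead of cascading the delete, which is why the RS must still exist 30 seconds later.

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Delete a Deployment while orphaning its ReplicaSets.
func orphanDelete(client kubernetes.Interface) error {
	orphan := metav1.DeletePropagationOrphan
	return client.AppsV1().Deployments("gc-8821").Delete(
		"test-deployment", // hypothetical name
		&metav1.DeleteOptions{PropagationPolicy: &orphan})
}
```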
\"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-snb5h\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-snb5h\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-18T13:49:35Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-18T13:49:38Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-18T13:49:38Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-18T13:49:35Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://3141075e695c7e7e32406725f53a52afdbc8fe8fba1d52d9e379ff07441b4acb\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-18T13:49:38Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.51\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-18T13:49:35Z\"\n }\n}\n" STEP: replace the image in the pod Mar 18 13:49:40.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7906' Mar 18 13:49:41.302: INFO: stderr: "" Mar 18 13:49:41.302: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Mar 18 13:49:41.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7906' Mar 18 13:49:44.356: INFO: stderr: "" Mar 18 13:49:44.356: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:49:44.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7906" for this suite. 
Mar 18 13:49:50.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:49:50.462: INFO: namespace kubectl-7906 deletion completed in 6.086620586s • [SLOW TEST:16.979 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:49:50.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 13:49:50.517: INFO: Creating ReplicaSet my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e Mar 18 13:49:50.544: INFO: Pod name my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e: Found 0 pods out of 1 Mar 18 13:49:55.549: INFO: Pod name my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e: Found 1 pods out of 1 Mar 18 13:49:55.549: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e" is running Mar 18 13:49:55.552: INFO: Pod "my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e-dv2r7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 13:49:50 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 13:49:53 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 13:49:53 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 13:49:50 +0000 UTC Reason: Message:}]) Mar 18 13:49:55.552: INFO: Trying to dial the pod Mar 18 13:50:00.564: INFO: Controller my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e: Got expected result from replica 1 [my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e-dv2r7]: "my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e-dv2r7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:50:00.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1902" for this suite. 
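The API-level equivalent of the kubectl replace above, sketched with pre-1.18 client-go signatures: a container's image is one of the few pod spec fields that may be mutated in place, which is why the replace succeeds without recreating the pod.

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Fetch the running pod, swap the container image, and write it back.
func replaceImage(client kubernetes.Interface) error {
	pods := client.CoreV1().Pods("kubectl-7906")
	pod, err := pods.Get("e2e-test-nginx-pod", metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Spec.Containers[0].Image = "docker.io/library/busybox:1.29"
	_, err = pods.Update(pod)
	return err
}
```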
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 13:49:50.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 18 13:49:50.517: INFO: Creating ReplicaSet my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e
Mar 18 13:49:50.544: INFO: Pod name my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e: Found 0 pods out of 1
Mar 18 13:49:55.549: INFO: Pod name my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e: Found 1 pods out of 1
Mar 18 13:49:55.549: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e" is running
Mar 18 13:49:55.552: INFO: Pod "my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e-dv2r7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 13:49:50 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 13:49:53 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 13:49:53 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 13:49:50 +0000 UTC Reason: Message:}])
Mar 18 13:49:55.552: INFO: Trying to dial the pod
Mar 18 13:50:00.564: INFO: Controller my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e: Got expected result from replica 1 [my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e-dv2r7]: "my-hostname-basic-ad69f99d-e91b-479d-88fc-09ef597ed73e-dv2r7", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 13:50:00.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1902" for this suite.
Mar 18 13:50:06.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 13:50:06.667: INFO: namespace replicaset-1902 deletion completed in 6.099285634s
• [SLOW TEST:16.205 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
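A sketch of the ReplicaSet shape the test creates: a serve-hostname container that echoes its own pod name, which is exactly what the "Got expected result from replica 1" dial step checks. The image tag and label value below are assumptions.

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var one = int32(1)

// Minimal single-replica ReplicaSet; the selector must match the
// template labels or the API server rejects it.
var rs = &appsv1.ReplicaSet{
	ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
	Spec: appsv1.ReplicaSetSpec{
		Replicas: &one,
		Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "my-hostname-basic"}},
		Template: corev1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "my-hostname-basic"}},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "my-hostname-basic",
					Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed tag
				}},
			},
		},
	},
}
```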
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 13:50:06.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 18 13:50:10.798: INFO: Waiting up to 5m0s for pod "client-envvars-07c62225-1fb9-45bf-b564-f9758ef1507f" in namespace "pods-9947" to be "success or failure"
Mar 18 13:50:10.804: INFO: Pod "client-envvars-07c62225-1fb9-45bf-b564-f9758ef1507f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.929732ms
Mar 18 13:50:12.808: INFO: Pod "client-envvars-07c62225-1fb9-45bf-b564-f9758ef1507f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010130562s
Mar 18 13:50:14.812: INFO: Pod "client-envvars-07c62225-1fb9-45bf-b564-f9758ef1507f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01423096s
STEP: Saw pod success
Mar 18 13:50:14.812: INFO: Pod "client-envvars-07c62225-1fb9-45bf-b564-f9758ef1507f" satisfied condition "success or failure"
Mar 18 13:50:14.815: INFO: Trying to get logs from node iruya-worker pod client-envvars-07c62225-1fb9-45bf-b564-f9758ef1507f container env3cont:
STEP: delete the pod
Mar 18 13:50:14.846: INFO: Waiting for pod client-envvars-07c62225-1fb9-45bf-b564-f9758ef1507f to disappear
Mar 18 13:50:14.857: INFO: Pod client-envvars-07c62225-1fb9-45bf-b564-f9758ef1507f no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 13:50:14.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9947" for this suite.
Mar 18 13:50:52.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 13:50:52.975: INFO: namespace pods-9947 deletion completed in 38.114258969s
• [SLOW TEST:46.307 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
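What env3cont is verifying: for every service that exists when a pod starts, the kubelet injects Docker-links-style variables into the container environment (service name uppercased, dashes becoming underscores), which is also why the test creates its service before the client pod. A sketch of the consumer side, with a hypothetical service name:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// For a service named "fooservice" the kubelet sets, among others:
	fmt.Println(os.Getenv("FOOSERVICE_SERVICE_HOST")) // cluster IP
	fmt.Println(os.Getenv("FOOSERVICE_SERVICE_PORT")) // first port
}
```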
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 13:50:52.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 18 13:50:53.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-230c64e9-4af6-4df8-af16-7dff3c4414f1" in namespace "projected-9115" to be "success or failure"
Mar 18 13:50:53.047: INFO: Pod "downwardapi-volume-230c64e9-4af6-4df8-af16-7dff3c4414f1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.555915ms
Mar 18 13:50:55.051: INFO: Pod "downwardapi-volume-230c64e9-4af6-4df8-af16-7dff3c4414f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0186587s
Mar 18 13:50:57.055: INFO: Pod "downwardapi-volume-230c64e9-4af6-4df8-af16-7dff3c4414f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022501716s
STEP: Saw pod success
Mar 18 13:50:57.055: INFO: Pod "downwardapi-volume-230c64e9-4af6-4df8-af16-7dff3c4414f1" satisfied condition "success or failure"
Mar 18 13:50:57.058: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-230c64e9-4af6-4df8-af16-7dff3c4414f1 container client-container:
STEP: delete the pod
Mar 18 13:50:57.108: INFO: Waiting for pod downwardapi-volume-230c64e9-4af6-4df8-af16-7dff3c4414f1 to disappear
Mar 18 13:50:57.114: INFO: Pod downwardapi-volume-230c64e9-4af6-4df8-af16-7dff3c4414f1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 13:50:57.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9115" for this suite.
Mar 18 13:51:03.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 13:51:03.213: INFO: namespace projected-9115 deletion completed in 6.094738956s
• [SLOW TEST:10.238 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 13:51:03.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Mar 18 13:51:03.275: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4068,SelfLink:/api/v1/namespaces/watch-4068/configmaps/e2e-watch-test-configmap-a,UID:60efbc9f-47ce-4de7-be86-918fe868301e,ResourceVersion:524359,Generation:0,CreationTimestamp:2020-03-18 13:51:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 18 13:51:03.275: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4068,SelfLink:/api/v1/namespaces/watch-4068/configmaps/e2e-watch-test-configmap-a,UID:60efbc9f-47ce-4de7-be86-918fe868301e,ResourceVersion:524359,Generation:0,CreationTimestamp:2020-03-18 13:51:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Mar 18 13:51:13.287: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4068,SelfLink:/api/v1/namespaces/watch-4068/configmaps/e2e-watch-test-configmap-a,UID:60efbc9f-47ce-4de7-be86-918fe868301e,ResourceVersion:524380,Generation:0,CreationTimestamp:2020-03-18 13:51:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar 18 13:51:13.287: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4068,SelfLink:/api/v1/namespaces/watch-4068/configmaps/e2e-watch-test-configmap-a,UID:60efbc9f-47ce-4de7-be86-918fe868301e,ResourceVersion:524380,Generation:0,CreationTimestamp:2020-03-18 13:51:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Mar 18 13:51:23.296: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4068,SelfLink:/api/v1/namespaces/watch-4068/configmaps/e2e-watch-test-configmap-a,UID:60efbc9f-47ce-4de7-be86-918fe868301e,ResourceVersion:524400,Generation:0,CreationTimestamp:2020-03-18 13:51:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 18 13:51:23.296: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4068,SelfLink:/api/v1/namespaces/watch-4068/configmaps/e2e-watch-test-configmap-a,UID:60efbc9f-47ce-4de7-be86-918fe868301e,ResourceVersion:524400,Generation:0,CreationTimestamp:2020-03-18 13:51:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Mar 18 13:51:33.304: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4068,SelfLink:/api/v1/namespaces/watch-4068/configmaps/e2e-watch-test-configmap-a,UID:60efbc9f-47ce-4de7-be86-918fe868301e,ResourceVersion:524421,Generation:0,CreationTimestamp:2020-03-18 13:51:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 18 13:51:33.304: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4068,SelfLink:/api/v1/namespaces/watch-4068/configmaps/e2e-watch-test-configmap-a,UID:60efbc9f-47ce-4de7-be86-918fe868301e,ResourceVersion:524421,Generation:0,CreationTimestamp:2020-03-18 13:51:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Mar 18 13:51:43.312: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4068,SelfLink:/api/v1/namespaces/watch-4068/configmaps/e2e-watch-test-configmap-b,UID:b388c127-2d63-48c0-947d-786841a82572,ResourceVersion:524442,Generation:0,CreationTimestamp:2020-03-18 13:51:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 18 13:51:43.312: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4068,SelfLink:/api/v1/namespaces/watch-4068/configmaps/e2e-watch-test-configmap-b,UID:b388c127-2d63-48c0-947d-786841a82572,ResourceVersion:524442,Generation:0,CreationTimestamp:2020-03-18 13:51:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Mar 18 13:51:53.318: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4068,SelfLink:/api/v1/namespaces/watch-4068/configmaps/e2e-watch-test-configmap-b,UID:b388c127-2d63-48c0-947d-786841a82572,ResourceVersion:524462,Generation:0,CreationTimestamp:2020-03-18 13:51:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 18 13:51:53.319: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4068,SelfLink:/api/v1/namespaces/watch-4068/configmaps/e2e-watch-test-configmap-b,UID:b388c127-2d63-48c0-947d-786841a82572,ResourceVersion:524462,Generation:0,CreationTimestamp:2020-03-18 13:51:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 13:52:03.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4068" for this suite.
Mar 18 13:52:09.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 13:52:09.419: INFO: namespace watch-4068 deletion completed in 6.095786472s
• [SLOW TEST:66.206 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
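Each event above is logged twice because both the label-A watch and the A-or-B watch match configmap A (and likewise for B). A sketch of one such watch, with pre-1.18 client-go signatures:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Open a label-selected watch on configmaps and drain its event channel,
// roughly what the test does for each of its three watchers.
func watchLabelA(client kubernetes.Interface) error {
	w, err := client.CoreV1().ConfigMaps("watch-4068").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
	return nil
}
```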
PTR)" && test -n "$$check" && echo OK > /results/10.97.9.197_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5119.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5119.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5119.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5119.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5119.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5119.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5119.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5119.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5119.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5119.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5119.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 197.9.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.9.197_udp@PTR;check="$$(dig +tcp +noall +answer +search 197.9.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.9.197_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 13:52:15.611: INFO: Unable to read wheezy_udp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:15.615: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:15.618: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:15.621: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:15.643: INFO: Unable to read jessie_udp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:15.647: INFO: Unable to read jessie_tcp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:15.650: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:15.653: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:15.673: INFO: Lookups using dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139 failed for: [wheezy_udp@dns-test-service.dns-5119.svc.cluster.local wheezy_tcp@dns-test-service.dns-5119.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local jessie_udp@dns-test-service.dns-5119.svc.cluster.local jessie_tcp@dns-test-service.dns-5119.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local] Mar 18 13:52:20.678: INFO: Unable to read wheezy_udp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:20.681: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods 
dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:20.685: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:20.688: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:20.711: INFO: Unable to read jessie_udp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:20.720: INFO: Unable to read jessie_tcp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:20.726: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:20.730: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:20.749: INFO: Lookups using dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139 failed for: [wheezy_udp@dns-test-service.dns-5119.svc.cluster.local wheezy_tcp@dns-test-service.dns-5119.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local jessie_udp@dns-test-service.dns-5119.svc.cluster.local jessie_tcp@dns-test-service.dns-5119.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local] Mar 18 13:52:25.679: INFO: Unable to read wheezy_udp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:25.682: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:25.686: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:25.689: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:25.711: INFO: Unable to read jessie_udp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the 
server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:25.714: INFO: Unable to read jessie_tcp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:25.717: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:25.720: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:25.738: INFO: Lookups using dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139 failed for: [wheezy_udp@dns-test-service.dns-5119.svc.cluster.local wheezy_tcp@dns-test-service.dns-5119.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local jessie_udp@dns-test-service.dns-5119.svc.cluster.local jessie_tcp@dns-test-service.dns-5119.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local] Mar 18 13:52:30.682: INFO: Unable to read wheezy_udp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:30.701: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:30.705: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:30.708: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:30.729: INFO: Unable to read jessie_udp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:30.731: INFO: Unable to read jessie_tcp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:30.735: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:30.737: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod 
dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:30.756: INFO: Lookups using dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139 failed for: [wheezy_udp@dns-test-service.dns-5119.svc.cluster.local wheezy_tcp@dns-test-service.dns-5119.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local jessie_udp@dns-test-service.dns-5119.svc.cluster.local jessie_tcp@dns-test-service.dns-5119.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local] Mar 18 13:52:35.678: INFO: Unable to read wheezy_udp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:35.683: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:35.687: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:35.690: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:35.715: INFO: Unable to read jessie_udp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:35.717: INFO: Unable to read jessie_tcp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:35.719: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:35.722: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:35.737: INFO: Lookups using dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139 failed for: [wheezy_udp@dns-test-service.dns-5119.svc.cluster.local wheezy_tcp@dns-test-service.dns-5119.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local jessie_udp@dns-test-service.dns-5119.svc.cluster.local jessie_tcp@dns-test-service.dns-5119.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local] Mar 18 
13:52:40.678: INFO: Unable to read wheezy_udp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:40.681: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:40.685: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:40.689: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:40.712: INFO: Unable to read jessie_udp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:40.721: INFO: Unable to read jessie_tcp@dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:40.724: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:40.727: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local from pod dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139: the server could not find the requested resource (get pods dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139) Mar 18 13:52:40.745: INFO: Lookups using dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139 failed for: [wheezy_udp@dns-test-service.dns-5119.svc.cluster.local wheezy_tcp@dns-test-service.dns-5119.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local jessie_udp@dns-test-service.dns-5119.svc.cluster.local jessie_tcp@dns-test-service.dns-5119.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5119.svc.cluster.local] Mar 18 13:52:45.730: INFO: DNS probes using dns-5119/dns-test-01e999df-6c88-47bb-a62d-1a84a0b4d139 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:52:46.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5119" for this suite. 
Mar 18 13:52:52.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:52:52.432: INFO: namespace dns-5119 deletion completed in 6.112634381s • [SLOW TEST:43.012 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:52:52.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 18 13:53:00.576: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 13:53:00.579: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 13:53:02.580: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 13:53:02.584: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 13:53:04.580: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 13:53:04.583: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 13:53:06.580: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 13:53:06.583: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 13:53:08.579: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 13:53:08.583: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 13:53:10.580: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 13:53:10.599: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 13:53:12.580: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 13:53:12.583: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 13:53:14.580: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 13:53:14.583: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 13:53:16.580: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 13:53:16.594: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 13:53:18.580: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 13:53:18.583: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 13:53:20.579: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 13:53:20.592: INFO: Pod pod-with-prestop-exec-hook still exists Mar 18 13:53:22.580: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 18 13:53:22.583: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] 
[k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:53:22.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4413" for this suite. Mar 18 13:53:44.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:53:44.753: INFO: namespace container-lifecycle-hook-4413 deletion completed in 22.154410994s • [SLOW TEST:52.321 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:53:44.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0318 13:53:55.616954 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 13:53:55.617: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:53:55.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6520" for this suite. 
Mar 18 13:54:03.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:54:03.708: INFO: namespace gc-6520 deletion completed in 8.08708025s • [SLOW TEST:18.955 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:54:03.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 18 13:54:06.804: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:54:06.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3007" for this suite. 
Mar 18 13:54:12.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:54:12.962: INFO: namespace container-runtime-3007 deletion completed in 6.114571572s • [SLOW TEST:9.253 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:54:12.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5809.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5809.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5809.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5809.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5809.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5809.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 13:54:19.079: INFO: DNS probes using dns-5809/dns-test-19432c57-b6ad-4c2b-8df2-334459030415 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:54:19.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5809" for this suite. Mar 18 13:54:25.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:54:25.218: INFO: namespace dns-5809 deletion completed in 6.106226669s • [SLOW TEST:12.255 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:54:25.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-8efba6e0-96e1-48ac-9879-cb9d2869e30c STEP: Creating a pod to test consume secrets Mar 18 13:54:25.300: INFO: Waiting up to 5m0s for pod "pod-secrets-a49b229c-f657-484f-83b8-a137184e3b62" in namespace "secrets-4272" to be "success or failure" Mar 18 13:54:25.304: INFO: Pod "pod-secrets-a49b229c-f657-484f-83b8-a137184e3b62": Phase="Pending", Reason="", readiness=false. Elapsed: 3.614525ms Mar 18 13:54:27.355: INFO: Pod "pod-secrets-a49b229c-f657-484f-83b8-a137184e3b62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054485883s Mar 18 13:54:29.359: INFO: Pod "pod-secrets-a49b229c-f657-484f-83b8-a137184e3b62": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.058935398s STEP: Saw pod success Mar 18 13:54:29.360: INFO: Pod "pod-secrets-a49b229c-f657-484f-83b8-a137184e3b62" satisfied condition "success or failure" Mar 18 13:54:29.362: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-a49b229c-f657-484f-83b8-a137184e3b62 container secret-volume-test: STEP: delete the pod Mar 18 13:54:29.434: INFO: Waiting for pod pod-secrets-a49b229c-f657-484f-83b8-a137184e3b62 to disappear Mar 18 13:54:29.453: INFO: Pod pod-secrets-a49b229c-f657-484f-83b8-a137184e3b62 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:54:29.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4272" for this suite. Mar 18 13:54:35.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:54:35.560: INFO: namespace secrets-4272 deletion completed in 6.102779224s • [SLOW TEST:10.341 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:54:35.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Mar 18 13:54:35.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8002 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 18 13:54:38.379: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0318 13:54:38.308500 2211 log.go:172] (0xc000a8c790) (0xc0003dc5a0) Create stream\nI0318 13:54:38.308548 2211 log.go:172] (0xc000a8c790) (0xc0003dc5a0) Stream added, broadcasting: 1\nI0318 13:54:38.310728 2211 log.go:172] (0xc000a8c790) Reply frame received for 1\nI0318 13:54:38.310780 2211 log.go:172] (0xc000a8c790) (0xc000676000) Create stream\nI0318 13:54:38.310794 2211 log.go:172] (0xc000a8c790) (0xc000676000) Stream added, broadcasting: 3\nI0318 13:54:38.311668 2211 log.go:172] (0xc000a8c790) Reply frame received for 3\nI0318 13:54:38.311702 2211 log.go:172] (0xc000a8c790) (0xc0003dc640) Create stream\nI0318 13:54:38.311713 2211 log.go:172] (0xc000a8c790) (0xc0003dc640) Stream added, broadcasting: 5\nI0318 13:54:38.312495 2211 log.go:172] (0xc000a8c790) Reply frame received for 5\nI0318 13:54:38.312528 2211 log.go:172] (0xc000a8c790) (0xc0006760a0) Create stream\nI0318 13:54:38.312541 2211 log.go:172] (0xc000a8c790) (0xc0006760a0) Stream added, broadcasting: 7\nI0318 13:54:38.313558 2211 log.go:172] (0xc000a8c790) Reply frame received for 7\nI0318 13:54:38.313734 2211 log.go:172] (0xc000676000) (3) Writing data frame\nI0318 13:54:38.313874 2211 log.go:172] (0xc000676000) (3) Writing data frame\nI0318 13:54:38.314652 2211 log.go:172] (0xc000a8c790) Data frame received for 5\nI0318 13:54:38.314676 2211 log.go:172] (0xc0003dc640) (5) Data frame handling\nI0318 13:54:38.314690 2211 log.go:172] (0xc0003dc640) (5) Data frame sent\nI0318 13:54:38.315014 2211 log.go:172] (0xc000a8c790) Data frame received for 5\nI0318 13:54:38.315031 2211 log.go:172] (0xc0003dc640) (5) Data frame handling\nI0318 13:54:38.315045 2211 log.go:172] (0xc0003dc640) (5) Data frame sent\nI0318 13:54:38.355330 2211 log.go:172] (0xc000a8c790) Data frame received for 5\nI0318 13:54:38.355357 2211 log.go:172] (0xc0003dc640) (5) Data frame handling\nI0318 13:54:38.355398 2211 log.go:172] (0xc000a8c790) Data frame received for 7\nI0318 13:54:38.355444 2211 log.go:172] (0xc0006760a0) (7) Data frame handling\nI0318 13:54:38.355948 2211 log.go:172] (0xc000a8c790) Data frame received for 1\nI0318 13:54:38.355979 2211 log.go:172] (0xc0003dc5a0) (1) Data frame handling\nI0318 13:54:38.355993 2211 log.go:172] (0xc0003dc5a0) (1) Data frame sent\nI0318 13:54:38.356007 2211 log.go:172] (0xc000a8c790) (0xc0003dc5a0) Stream removed, broadcasting: 1\nI0318 13:54:38.356043 2211 log.go:172] (0xc000a8c790) (0xc000676000) Stream removed, broadcasting: 3\nI0318 13:54:38.356099 2211 log.go:172] (0xc000a8c790) Go away received\nI0318 13:54:38.356154 2211 log.go:172] (0xc000a8c790) (0xc0003dc5a0) Stream removed, broadcasting: 1\nI0318 13:54:38.356216 2211 log.go:172] (0xc000a8c790) (0xc000676000) Stream removed, broadcasting: 3\nI0318 13:54:38.356248 2211 log.go:172] (0xc000a8c790) (0xc0003dc640) Stream removed, broadcasting: 5\nI0318 13:54:38.356273 2211 log.go:172] (0xc000a8c790) (0xc0006760a0) Stream removed, broadcasting: 7\n" Mar 18 13:54:38.379: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:54:40.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8002" for this suite. 
Mar 18 13:54:46.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:54:46.484: INFO: namespace kubectl-8002 deletion completed in 6.095014159s • [SLOW TEST:10.924 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:54:46.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 18 13:54:51.070: INFO: Successfully updated pod "labelsupdate2f63d7ee-4ea1-475b-bd0f-6640ee9b0581" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:54:53.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6173" for this suite. 
Mar 18 13:55:15.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:55:15.220: INFO: namespace projected-6173 deletion completed in 22.11037498s • [SLOW TEST:28.736 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:55:15.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 18 13:55:19.313: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-b2bfe0c9-3def-4daf-a14b-9009a8a3ee4a,GenerateName:,Namespace:events-1748,SelfLink:/api/v1/namespaces/events-1748/pods/send-events-b2bfe0c9-3def-4daf-a14b-9009a8a3ee4a,UID:5c8143f5-1b77-4f06-9950-39b035615960,ResourceVersion:525307,Generation:0,CreationTimestamp:2020-03-18 13:55:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 285630400,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bbd45 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bbd45,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-bbd45 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a3e370} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002a3e390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 13:55:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 13:55:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 13:55:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 13:55:15 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.80,StartTime:2020-03-18 13:55:15 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-18 13:55:17 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://cb4772ed583cefc674164dbf4b7615b407a9e7c9c933ea3d5f519f20f2304db7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 18 13:55:21.319: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 18 13:55:23.324: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:55:23.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1748" for this suite. Mar 18 13:56:03.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:56:03.461: INFO: namespace events-1748 deletion completed in 40.121872135s • [SLOW TEST:48.240 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:56:03.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-476c2079-7aee-4815-ac15-dbd4ccb69b62 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-476c2079-7aee-4815-ac15-dbd4ccb69b62 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:56:09.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8785" for this suite. 
Mar 18 13:56:31.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:56:31.681: INFO: namespace projected-8785 deletion completed in 22.09906154s • [SLOW TEST:28.220 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:56:31.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:57:00.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8575" for this suite. 
Mar 18 13:57:06.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:57:06.411: INFO: namespace container-runtime-8575 deletion completed in 6.087611432s • [SLOW TEST:34.730 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:57:06.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-b3242990-1216-453f-9660-4d92ab056643 STEP: Creating a pod to test consume secrets Mar 18 13:57:06.503: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d52ef7b0-a2e9-44ae-ae8e-cf59a8a215f0" in namespace "projected-8658" to be "success or failure" Mar 18 13:57:06.507: INFO: Pod "pod-projected-secrets-d52ef7b0-a2e9-44ae-ae8e-cf59a8a215f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087376ms Mar 18 13:57:08.511: INFO: Pod "pod-projected-secrets-d52ef7b0-a2e9-44ae-ae8e-cf59a8a215f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008373971s Mar 18 13:57:10.515: INFO: Pod "pod-projected-secrets-d52ef7b0-a2e9-44ae-ae8e-cf59a8a215f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012556346s STEP: Saw pod success Mar 18 13:57:10.515: INFO: Pod "pod-projected-secrets-d52ef7b0-a2e9-44ae-ae8e-cf59a8a215f0" satisfied condition "success or failure" Mar 18 13:57:10.519: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-d52ef7b0-a2e9-44ae-ae8e-cf59a8a215f0 container projected-secret-volume-test: STEP: delete the pod Mar 18 13:57:10.543: INFO: Waiting for pod pod-projected-secrets-d52ef7b0-a2e9-44ae-ae8e-cf59a8a215f0 to disappear Mar 18 13:57:10.546: INFO: Pod pod-projected-secrets-d52ef7b0-a2e9-44ae-ae8e-cf59a8a215f0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:57:10.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8658" for this suite. 
Mar 18 13:57:16.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:57:16.637: INFO: namespace projected-8658 deletion completed in 6.088264258s • [SLOW TEST:10.225 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:57:16.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 13:57:16.707: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.276668ms)
Mar 18 13:57:16.710: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.72464ms)
Mar 18 13:57:16.718: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 7.573643ms)
Mar 18 13:57:16.721: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.448661ms)
Mar 18 13:57:16.724: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.971759ms)
Mar 18 13:57:16.727: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.012781ms)
Mar 18 13:57:16.730: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.656532ms)
Mar 18 13:57:16.732: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.457173ms)
Mar 18 13:57:16.735: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.76668ms)
Mar 18 13:57:16.738: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.429169ms)
Mar 18 13:57:16.740: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.504039ms)
Mar 18 13:57:16.743: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.173513ms)
Mar 18 13:57:16.746: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.808293ms)
Mar 18 13:57:16.748: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.131211ms)
Mar 18 13:57:16.751: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.713909ms)
Mar 18 13:57:16.755: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.687254ms)
Mar 18 13:57:16.758: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.421685ms)
Mar 18 13:57:16.761: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.857286ms)
Mar 18 13:57:16.764: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.840432ms)
Mar 18 13:57:16.767: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 2.93775ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:57:16.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6290" for this suite. Mar 18 13:57:22.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:57:22.863: INFO: namespace proxy-6290 deletion completed in 6.093074048s • [SLOW TEST:6.225 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:57:22.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-cc73ca30-1c19-4b60-9e3a-8858327d1441 STEP: Creating a pod to test consume secrets Mar 18 13:57:22.953: INFO: Waiting up to 5m0s for pod "pod-secrets-d3306a25-3457-40bd-8625-301c973ea953" in namespace "secrets-1406" to be "success or failure" Mar 18 13:57:22.992: INFO: Pod "pod-secrets-d3306a25-3457-40bd-8625-301c973ea953": Phase="Pending", Reason="", readiness=false. Elapsed: 38.759039ms Mar 18 13:57:24.996: INFO: Pod "pod-secrets-d3306a25-3457-40bd-8625-301c973ea953": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043109946s Mar 18 13:57:27.004: INFO: Pod "pod-secrets-d3306a25-3457-40bd-8625-301c973ea953": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050731977s STEP: Saw pod success Mar 18 13:57:27.004: INFO: Pod "pod-secrets-d3306a25-3457-40bd-8625-301c973ea953" satisfied condition "success or failure" Mar 18 13:57:27.006: INFO: Trying to get logs from node iruya-worker pod pod-secrets-d3306a25-3457-40bd-8625-301c973ea953 container secret-volume-test: STEP: delete the pod Mar 18 13:57:27.036: INFO: Waiting for pod pod-secrets-d3306a25-3457-40bd-8625-301c973ea953 to disappear Mar 18 13:57:27.048: INFO: Pod pod-secrets-d3306a25-3457-40bd-8625-301c973ea953 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:57:27.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1406" for this suite. 
Mar 18 13:57:33.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:57:33.137: INFO: namespace secrets-1406 deletion completed in 6.085418782s • [SLOW TEST:10.274 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:57:33.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9460.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9460.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9460.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9460.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 13:57:39.241: INFO: DNS probes using dns-test-3b914458-45f1-4e05-93bb-ba3d47617aaa succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9460.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9460.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9460.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9460.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 13:57:45.364: INFO: File wheezy_udp@dns-test-service-3.dns-9460.svc.cluster.local from pod dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 13:57:45.367: INFO: File jessie_udp@dns-test-service-3.dns-9460.svc.cluster.local from pod dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 13:57:45.367: INFO: Lookups using dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 failed for: [wheezy_udp@dns-test-service-3.dns-9460.svc.cluster.local jessie_udp@dns-test-service-3.dns-9460.svc.cluster.local] Mar 18 13:57:50.373: INFO: File wheezy_udp@dns-test-service-3.dns-9460.svc.cluster.local from pod dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 18 13:57:50.377: INFO: File jessie_udp@dns-test-service-3.dns-9460.svc.cluster.local from pod dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 13:57:50.377: INFO: Lookups using dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 failed for: [wheezy_udp@dns-test-service-3.dns-9460.svc.cluster.local jessie_udp@dns-test-service-3.dns-9460.svc.cluster.local] Mar 18 13:57:55.372: INFO: File wheezy_udp@dns-test-service-3.dns-9460.svc.cluster.local from pod dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 13:57:55.375: INFO: File jessie_udp@dns-test-service-3.dns-9460.svc.cluster.local from pod dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 13:57:55.375: INFO: Lookups using dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 failed for: [wheezy_udp@dns-test-service-3.dns-9460.svc.cluster.local jessie_udp@dns-test-service-3.dns-9460.svc.cluster.local] Mar 18 13:58:00.372: INFO: File wheezy_udp@dns-test-service-3.dns-9460.svc.cluster.local from pod dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 13:58:00.376: INFO: File jessie_udp@dns-test-service-3.dns-9460.svc.cluster.local from pod dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 13:58:00.376: INFO: Lookups using dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 failed for: [wheezy_udp@dns-test-service-3.dns-9460.svc.cluster.local jessie_udp@dns-test-service-3.dns-9460.svc.cluster.local] Mar 18 13:58:05.372: INFO: File wheezy_udp@dns-test-service-3.dns-9460.svc.cluster.local from pod dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 18 13:58:05.376: INFO: File jessie_udp@dns-test-service-3.dns-9460.svc.cluster.local from pod dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 contains 'foo.example.com. ' instead of 'bar.example.com.' 
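The repeated mismatches above are the expected settling period: the prober keeps writing `dig` results for the CNAME until cluster DNS stops serving the old target. The object under test is an ExternalName Service; a minimal sketch of it, assuming core/v1 types (the service name matches the log; foo/bar are the test's own targets):

// Fragment: an ExternalName Service; cluster DNS answers lookups for its
// in-cluster name with a CNAME to Spec.ExternalName.
svc := &v1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
	Spec: v1.ServiceSpec{
		Type:         v1.ServiceTypeExternalName,
		ExternalName: "foo.example.com", // the test updates this to bar.example.com
	},
}

The test later switches Spec.Type to ClusterIP as well; once each update propagates, the probes converge, as the "DNS probes ... succeeded" lines below record.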
Mar 18 13:58:05.376: INFO: Lookups using dns-9460/dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 failed for: [wheezy_udp@dns-test-service-3.dns-9460.svc.cluster.local jessie_udp@dns-test-service-3.dns-9460.svc.cluster.local] Mar 18 13:58:10.376: INFO: DNS probes using dns-test-9c5c1647-e1c9-459c-8ce9-99db2026eb34 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9460.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9460.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9460.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9460.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 18 13:58:16.947: INFO: DNS probes using dns-test-fab02985-c375-47ea-ba38-65b4925e6759 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:58:17.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9460" for this suite. Mar 18 13:58:23.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:58:23.123: INFO: namespace dns-9460 deletion completed in 6.096973624s • [SLOW TEST:49.985 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:58:23.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 18 13:58:23.165: INFO: Waiting up to 5m0s for pod "pod-793a030f-9103-4c61-9503-d33904d0bd3a" in namespace "emptydir-8350" to be "success or failure" Mar 18 13:58:23.203: INFO: Pod "pod-793a030f-9103-4c61-9503-d33904d0bd3a": Phase="Pending", Reason="", readiness=false. Elapsed: 37.422094ms Mar 18 13:58:25.207: INFO: Pod "pod-793a030f-9103-4c61-9503-d33904d0bd3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041197277s Mar 18 13:58:27.211: INFO: Pod "pod-793a030f-9103-4c61-9503-d33904d0bd3a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045799314s STEP: Saw pod success Mar 18 13:58:27.211: INFO: Pod "pod-793a030f-9103-4c61-9503-d33904d0bd3a" satisfied condition "success or failure" Mar 18 13:58:27.214: INFO: Trying to get logs from node iruya-worker pod pod-793a030f-9103-4c61-9503-d33904d0bd3a container test-container: STEP: delete the pod Mar 18 13:58:27.248: INFO: Waiting for pod pod-793a030f-9103-4c61-9503-d33904d0bd3a to disappear Mar 18 13:58:27.253: INFO: Pod pod-793a030f-9103-4c61-9503-d33904d0bd3a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:58:27.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8350" for this suite. Mar 18 13:58:33.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:58:33.365: INFO: namespace emptydir-8350 deletion completed in 6.108449187s • [SLOW TEST:10.242 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:58:33.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Mar 18 13:58:33.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 18 13:58:33.570: INFO: stderr: "" Mar 18 13:58:33.570: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:58:33.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-988" for this suite. 
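The api-versions check above is a thin wrapper around the CLI: run kubectl and assert that the core group/version "v1" appears in stdout (it is the last entry in the output logged above). A self-contained sketch of the same assertion, assuming kubectl is on PATH and using the kubeconfig path shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of: kubectl --kubeconfig=/root/.kube/config api-versions
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "api-versions").Output()
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == "v1" {
			fmt.Println("core v1 API is available")
			return
		}
	}
	panic("v1 not found in api-versions output")
}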
Mar 18 13:58:39.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:58:39.686: INFO: namespace kubectl-988 deletion completed in 6.111567395s • [SLOW TEST:6.321 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:58:39.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-9715 I0318 13:58:39.743778 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9715, replica count: 1 I0318 13:58:40.794400 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 13:58:41.794666 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0318 13:58:42.794897 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 18 13:58:42.967: INFO: Created: latency-svc-66t6q Mar 18 13:58:42.968: INFO: Got endpoints: latency-svc-66t6q [73.541011ms] Mar 18 13:58:42.998: INFO: Created: latency-svc-ccsbl Mar 18 13:58:43.013: INFO: Got endpoints: latency-svc-ccsbl [44.945451ms] Mar 18 13:58:43.034: INFO: Created: latency-svc-rhsjp Mar 18 13:58:43.049: INFO: Got endpoints: latency-svc-rhsjp [81.03427ms] Mar 18 13:58:43.119: INFO: Created: latency-svc-bxc4x Mar 18 13:58:43.122: INFO: Got endpoints: latency-svc-bxc4x [153.999512ms] Mar 18 13:58:43.149: INFO: Created: latency-svc-nc4d9 Mar 18 13:58:43.165: INFO: Got endpoints: latency-svc-nc4d9 [196.987593ms] Mar 18 13:58:43.186: INFO: Created: latency-svc-94f2n Mar 18 13:58:43.201: INFO: Got endpoints: latency-svc-94f2n [232.877122ms] Mar 18 13:58:43.250: INFO: Created: latency-svc-s8lft Mar 18 13:58:43.255: INFO: Got endpoints: latency-svc-s8lft [286.596081ms] Mar 18 13:58:43.274: INFO: Created: latency-svc-hlgrs Mar 18 13:58:43.286: INFO: Got endpoints: latency-svc-hlgrs [317.278766ms] Mar 18 13:58:43.310: INFO: Created: latency-svc-gsqh6 Mar 18 13:58:43.326: INFO: Got endpoints: latency-svc-gsqh6 [71.589888ms] Mar 18 13:58:43.346: INFO: Created: latency-svc-gqkg4 Mar 18 13:58:43.388: INFO: Got endpoints: latency-svc-gqkg4 [419.499003ms] Mar 18 13:58:43.394: INFO: Created: latency-svc-49cb4 Mar 18 13:58:43.411: INFO: Got endpoints: latency-svc-49cb4 
[442.547202ms] Mar 18 13:58:43.431: INFO: Created: latency-svc-nnl6z Mar 18 13:58:43.448: INFO: Got endpoints: latency-svc-nnl6z [480.05522ms] Mar 18 13:58:43.474: INFO: Created: latency-svc-2t7jj Mar 18 13:58:43.483: INFO: Got endpoints: latency-svc-2t7jj [515.009219ms] Mar 18 13:58:43.539: INFO: Created: latency-svc-jj7nx Mar 18 13:58:43.562: INFO: Created: latency-svc-kvngc Mar 18 13:58:43.562: INFO: Got endpoints: latency-svc-jj7nx [593.743036ms] Mar 18 13:58:43.586: INFO: Got endpoints: latency-svc-kvngc [617.288122ms] Mar 18 13:58:43.618: INFO: Created: latency-svc-6fm9p Mar 18 13:58:43.630: INFO: Got endpoints: latency-svc-6fm9p [661.880973ms] Mar 18 13:58:43.670: INFO: Created: latency-svc-8v78q Mar 18 13:58:43.689: INFO: Got endpoints: latency-svc-8v78q [720.38636ms] Mar 18 13:58:43.689: INFO: Created: latency-svc-6tgvx Mar 18 13:58:43.712: INFO: Got endpoints: latency-svc-6tgvx [699.269852ms] Mar 18 13:58:43.742: INFO: Created: latency-svc-xv7jv Mar 18 13:58:43.756: INFO: Got endpoints: latency-svc-xv7jv [706.495477ms] Mar 18 13:58:43.808: INFO: Created: latency-svc-2dddr Mar 18 13:58:43.811: INFO: Got endpoints: latency-svc-2dddr [688.955882ms] Mar 18 13:58:43.864: INFO: Created: latency-svc-dnzl9 Mar 18 13:58:43.876: INFO: Got endpoints: latency-svc-dnzl9 [710.341542ms] Mar 18 13:58:43.899: INFO: Created: latency-svc-67lrh Mar 18 13:58:43.933: INFO: Got endpoints: latency-svc-67lrh [731.607854ms] Mar 18 13:58:43.941: INFO: Created: latency-svc-qw29v Mar 18 13:58:43.955: INFO: Got endpoints: latency-svc-qw29v [669.238789ms] Mar 18 13:58:43.993: INFO: Created: latency-svc-phtfc Mar 18 13:58:44.012: INFO: Got endpoints: latency-svc-phtfc [684.969719ms] Mar 18 13:58:44.085: INFO: Created: latency-svc-jtw7c Mar 18 13:58:44.099: INFO: Got endpoints: latency-svc-jtw7c [710.709538ms] Mar 18 13:58:44.127: INFO: Created: latency-svc-2z7s9 Mar 18 13:58:44.147: INFO: Got endpoints: latency-svc-2z7s9 [735.854911ms] Mar 18 13:58:44.175: INFO: Created: latency-svc-7j988 Mar 18 13:58:44.226: INFO: Got endpoints: latency-svc-7j988 [777.628687ms] Mar 18 13:58:44.245: INFO: Created: latency-svc-82pld Mar 18 13:58:44.275: INFO: Got endpoints: latency-svc-82pld [791.712601ms] Mar 18 13:58:44.325: INFO: Created: latency-svc-rch9f Mar 18 13:58:44.436: INFO: Got endpoints: latency-svc-rch9f [873.587977ms] Mar 18 13:58:44.440: INFO: Created: latency-svc-7q9l7 Mar 18 13:58:44.448: INFO: Got endpoints: latency-svc-7q9l7 [861.77431ms] Mar 18 13:58:44.491: INFO: Created: latency-svc-g6w98 Mar 18 13:58:44.508: INFO: Got endpoints: latency-svc-g6w98 [877.515497ms] Mar 18 13:58:44.574: INFO: Created: latency-svc-6ws7n Mar 18 13:58:44.578: INFO: Got endpoints: latency-svc-6ws7n [889.068752ms] Mar 18 13:58:44.613: INFO: Created: latency-svc-692hg Mar 18 13:58:44.635: INFO: Got endpoints: latency-svc-692hg [922.243218ms] Mar 18 13:58:44.655: INFO: Created: latency-svc-84l7n Mar 18 13:58:44.665: INFO: Got endpoints: latency-svc-84l7n [908.896646ms] Mar 18 13:58:44.711: INFO: Created: latency-svc-4cpdx Mar 18 13:58:44.720: INFO: Got endpoints: latency-svc-4cpdx [908.197246ms] Mar 18 13:58:44.743: INFO: Created: latency-svc-rfs7j Mar 18 13:58:44.756: INFO: Got endpoints: latency-svc-rfs7j [879.764493ms] Mar 18 13:58:44.785: INFO: Created: latency-svc-6xl98 Mar 18 13:58:44.831: INFO: Got endpoints: latency-svc-6xl98 [898.296924ms] Mar 18 13:58:44.840: INFO: Created: latency-svc-lhrrs Mar 18 13:58:44.852: INFO: Got endpoints: latency-svc-lhrrs [896.939755ms] Mar 18 13:58:44.871: INFO: Created: latency-svc-54jmd Mar 
18 13:58:44.882: INFO: Got endpoints: latency-svc-54jmd [870.796479ms] Mar 18 13:58:44.900: INFO: Created: latency-svc-76bjn Mar 18 13:58:44.913: INFO: Got endpoints: latency-svc-76bjn [814.052972ms] Mar 18 13:58:44.969: INFO: Created: latency-svc-b6gbc Mar 18 13:58:44.971: INFO: Got endpoints: latency-svc-b6gbc [824.360849ms] Mar 18 13:58:45.001: INFO: Created: latency-svc-9tp8w Mar 18 13:58:45.015: INFO: Got endpoints: latency-svc-9tp8w [788.861503ms] Mar 18 13:58:45.043: INFO: Created: latency-svc-w8rtd Mar 18 13:58:45.058: INFO: Got endpoints: latency-svc-w8rtd [782.310792ms] Mar 18 13:58:45.107: INFO: Created: latency-svc-fz5fl Mar 18 13:58:45.124: INFO: Got endpoints: latency-svc-fz5fl [687.71921ms] Mar 18 13:58:45.153: INFO: Created: latency-svc-hppft Mar 18 13:58:45.166: INFO: Got endpoints: latency-svc-hppft [718.197912ms] Mar 18 13:58:45.187: INFO: Created: latency-svc-xm2mw Mar 18 13:58:45.287: INFO: Got endpoints: latency-svc-xm2mw [779.022945ms] Mar 18 13:58:45.289: INFO: Created: latency-svc-wdtwj Mar 18 13:58:45.315: INFO: Got endpoints: latency-svc-wdtwj [736.855376ms] Mar 18 13:58:45.363: INFO: Created: latency-svc-ftdhl Mar 18 13:58:45.383: INFO: Got endpoints: latency-svc-ftdhl [747.817345ms] Mar 18 13:58:45.430: INFO: Created: latency-svc-mf8jh Mar 18 13:58:45.438: INFO: Got endpoints: latency-svc-mf8jh [773.134685ms] Mar 18 13:58:45.457: INFO: Created: latency-svc-4pn9z Mar 18 13:58:45.473: INFO: Got endpoints: latency-svc-4pn9z [753.675171ms] Mar 18 13:58:45.494: INFO: Created: latency-svc-wh9xl Mar 18 13:58:45.511: INFO: Got endpoints: latency-svc-wh9xl [755.377735ms] Mar 18 13:58:45.529: INFO: Created: latency-svc-64b9p Mar 18 13:58:45.586: INFO: Got endpoints: latency-svc-64b9p [754.249815ms] Mar 18 13:58:45.588: INFO: Created: latency-svc-dfx54 Mar 18 13:58:45.594: INFO: Got endpoints: latency-svc-dfx54 [742.239043ms] Mar 18 13:58:45.615: INFO: Created: latency-svc-8l46h Mar 18 13:58:45.625: INFO: Got endpoints: latency-svc-8l46h [742.428095ms] Mar 18 13:58:45.643: INFO: Created: latency-svc-kk57w Mar 18 13:58:45.655: INFO: Got endpoints: latency-svc-kk57w [742.005203ms] Mar 18 13:58:45.673: INFO: Created: latency-svc-sp5pp Mar 18 13:58:45.730: INFO: Got endpoints: latency-svc-sp5pp [758.224854ms] Mar 18 13:58:45.731: INFO: Created: latency-svc-tdfrl Mar 18 13:58:45.739: INFO: Got endpoints: latency-svc-tdfrl [724.093495ms] Mar 18 13:58:45.765: INFO: Created: latency-svc-5mkwh Mar 18 13:58:45.782: INFO: Got endpoints: latency-svc-5mkwh [724.087225ms] Mar 18 13:58:45.800: INFO: Created: latency-svc-lh6bf Mar 18 13:58:45.812: INFO: Got endpoints: latency-svc-lh6bf [688.359377ms] Mar 18 13:58:45.879: INFO: Created: latency-svc-vkvbx Mar 18 13:58:45.882: INFO: Got endpoints: latency-svc-vkvbx [716.222125ms] Mar 18 13:58:45.913: INFO: Created: latency-svc-h5xg2 Mar 18 13:58:45.927: INFO: Got endpoints: latency-svc-h5xg2 [639.716829ms] Mar 18 13:58:45.950: INFO: Created: latency-svc-rjjdl Mar 18 13:58:45.963: INFO: Got endpoints: latency-svc-rjjdl [647.846307ms] Mar 18 13:58:46.017: INFO: Created: latency-svc-kb98p Mar 18 13:58:46.020: INFO: Got endpoints: latency-svc-kb98p [637.018367ms] Mar 18 13:58:46.064: INFO: Created: latency-svc-z6mcg Mar 18 13:58:46.090: INFO: Got endpoints: latency-svc-z6mcg [651.659867ms] Mar 18 13:58:46.173: INFO: Created: latency-svc-xtqhl Mar 18 13:58:46.176: INFO: Got endpoints: latency-svc-xtqhl [702.502442ms] Mar 18 13:58:46.207: INFO: Created: latency-svc-9k7nj Mar 18 13:58:46.222: INFO: Got endpoints: latency-svc-9k7nj [711.286227ms] 
Mar 18 13:58:46.257: INFO: Created: latency-svc-vm4pd Mar 18 13:58:46.340: INFO: Got endpoints: latency-svc-vm4pd [753.873091ms] Mar 18 13:58:46.342: INFO: Created: latency-svc-m6622 Mar 18 13:58:46.349: INFO: Got endpoints: latency-svc-m6622 [754.327696ms] Mar 18 13:58:46.382: INFO: Created: latency-svc-762ft Mar 18 13:58:46.403: INFO: Got endpoints: latency-svc-762ft [778.2997ms] Mar 18 13:58:46.430: INFO: Created: latency-svc-2k49v Mar 18 13:58:46.466: INFO: Got endpoints: latency-svc-2k49v [810.730636ms] Mar 18 13:58:46.503: INFO: Created: latency-svc-nxvhc Mar 18 13:58:46.517: INFO: Got endpoints: latency-svc-nxvhc [787.625775ms] Mar 18 13:58:46.544: INFO: Created: latency-svc-8tvkm Mar 18 13:58:46.610: INFO: Got endpoints: latency-svc-8tvkm [870.379344ms] Mar 18 13:58:46.611: INFO: Created: latency-svc-mkzvn Mar 18 13:58:46.620: INFO: Got endpoints: latency-svc-mkzvn [837.760849ms] Mar 18 13:58:46.651: INFO: Created: latency-svc-zhzr8 Mar 18 13:58:46.668: INFO: Got endpoints: latency-svc-zhzr8 [855.670913ms] Mar 18 13:58:46.688: INFO: Created: latency-svc-pzdsp Mar 18 13:58:46.706: INFO: Got endpoints: latency-svc-pzdsp [823.653502ms] Mar 18 13:58:46.754: INFO: Created: latency-svc-f9rvm Mar 18 13:58:46.794: INFO: Got endpoints: latency-svc-f9rvm [867.422701ms] Mar 18 13:58:46.795: INFO: Created: latency-svc-9hrtq Mar 18 13:58:46.807: INFO: Got endpoints: latency-svc-9hrtq [844.243787ms] Mar 18 13:58:46.825: INFO: Created: latency-svc-b4p48 Mar 18 13:58:46.837: INFO: Got endpoints: latency-svc-b4p48 [817.413689ms] Mar 18 13:58:46.904: INFO: Created: latency-svc-f8nwb Mar 18 13:58:46.928: INFO: Got endpoints: latency-svc-f8nwb [837.884125ms] Mar 18 13:58:46.928: INFO: Created: latency-svc-8jtxm Mar 18 13:58:46.940: INFO: Got endpoints: latency-svc-8jtxm [763.821586ms] Mar 18 13:58:46.957: INFO: Created: latency-svc-q5p6m Mar 18 13:58:46.970: INFO: Got endpoints: latency-svc-q5p6m [747.784753ms] Mar 18 13:58:46.993: INFO: Created: latency-svc-xwb6m Mar 18 13:58:47.029: INFO: Got endpoints: latency-svc-xwb6m [689.266078ms] Mar 18 13:58:47.041: INFO: Created: latency-svc-wfplf Mar 18 13:58:47.072: INFO: Got endpoints: latency-svc-wfplf [722.840824ms] Mar 18 13:58:47.103: INFO: Created: latency-svc-bkxjj Mar 18 13:58:47.115: INFO: Got endpoints: latency-svc-bkxjj [711.717158ms] Mar 18 13:58:47.173: INFO: Created: latency-svc-htchr Mar 18 13:58:47.176: INFO: Got endpoints: latency-svc-htchr [710.330107ms] Mar 18 13:58:47.203: INFO: Created: latency-svc-v4nlp Mar 18 13:58:47.218: INFO: Got endpoints: latency-svc-v4nlp [700.331901ms] Mar 18 13:58:47.239: INFO: Created: latency-svc-dhgs7 Mar 18 13:58:47.268: INFO: Got endpoints: latency-svc-dhgs7 [658.587876ms] Mar 18 13:58:47.340: INFO: Created: latency-svc-tqbvs Mar 18 13:58:47.351: INFO: Got endpoints: latency-svc-tqbvs [731.358389ms] Mar 18 13:58:47.378: INFO: Created: latency-svc-d65md Mar 18 13:58:47.393: INFO: Got endpoints: latency-svc-d65md [725.315908ms] Mar 18 13:58:47.414: INFO: Created: latency-svc-mwlm4 Mar 18 13:58:47.430: INFO: Got endpoints: latency-svc-mwlm4 [723.767863ms] Mar 18 13:58:47.478: INFO: Created: latency-svc-z78hd Mar 18 13:58:47.481: INFO: Got endpoints: latency-svc-z78hd [686.485471ms] Mar 18 13:58:47.528: INFO: Created: latency-svc-2xdq4 Mar 18 13:58:47.544: INFO: Got endpoints: latency-svc-2xdq4 [736.931742ms] Mar 18 13:58:47.564: INFO: Created: latency-svc-wm874 Mar 18 13:58:47.628: INFO: Got endpoints: latency-svc-wm874 [790.456053ms] Mar 18 13:58:47.630: INFO: Created: latency-svc-6ffq5 Mar 18 
13:58:47.634: INFO: Got endpoints: latency-svc-6ffq5 [706.567124ms] Mar 18 13:58:47.659: INFO: Created: latency-svc-dldhp Mar 18 13:58:47.682: INFO: Got endpoints: latency-svc-dldhp [742.512933ms] Mar 18 13:58:47.706: INFO: Created: latency-svc-wlzfm Mar 18 13:58:47.719: INFO: Got endpoints: latency-svc-wlzfm [748.673771ms] Mar 18 13:58:47.766: INFO: Created: latency-svc-9fr2r Mar 18 13:58:47.770: INFO: Got endpoints: latency-svc-9fr2r [740.921396ms] Mar 18 13:58:47.792: INFO: Created: latency-svc-7bbq8 Mar 18 13:58:47.804: INFO: Got endpoints: latency-svc-7bbq8 [732.348543ms] Mar 18 13:58:47.822: INFO: Created: latency-svc-4zx5b Mar 18 13:58:47.846: INFO: Got endpoints: latency-svc-4zx5b [730.554596ms] Mar 18 13:58:47.903: INFO: Created: latency-svc-xb6h9 Mar 18 13:58:47.906: INFO: Got endpoints: latency-svc-xb6h9 [730.447336ms] Mar 18 13:58:47.935: INFO: Created: latency-svc-j6l78 Mar 18 13:58:47.943: INFO: Got endpoints: latency-svc-j6l78 [725.134057ms] Mar 18 13:58:47.964: INFO: Created: latency-svc-jlwzw Mar 18 13:58:47.984: INFO: Got endpoints: latency-svc-jlwzw [715.163855ms] Mar 18 13:58:48.054: INFO: Created: latency-svc-lp5cx Mar 18 13:58:48.063: INFO: Got endpoints: latency-svc-lp5cx [712.108683ms] Mar 18 13:58:48.086: INFO: Created: latency-svc-ztzsx Mar 18 13:58:48.112: INFO: Got endpoints: latency-svc-ztzsx [718.250702ms] Mar 18 13:58:48.145: INFO: Created: latency-svc-t9hqb Mar 18 13:58:48.196: INFO: Got endpoints: latency-svc-t9hqb [766.541387ms] Mar 18 13:58:48.217: INFO: Created: latency-svc-rvt9b Mar 18 13:58:48.233: INFO: Got endpoints: latency-svc-rvt9b [752.275922ms] Mar 18 13:58:48.254: INFO: Created: latency-svc-sms25 Mar 18 13:58:48.268: INFO: Got endpoints: latency-svc-sms25 [724.180304ms] Mar 18 13:58:48.290: INFO: Created: latency-svc-xxt2h Mar 18 13:58:48.316: INFO: Got endpoints: latency-svc-xxt2h [688.363415ms] Mar 18 13:58:48.338: INFO: Created: latency-svc-k27l8 Mar 18 13:58:48.373: INFO: Got endpoints: latency-svc-k27l8 [738.282048ms] Mar 18 13:58:48.409: INFO: Created: latency-svc-p6gqb Mar 18 13:58:48.442: INFO: Got endpoints: latency-svc-p6gqb [759.492958ms] Mar 18 13:58:48.452: INFO: Created: latency-svc-bz7v2 Mar 18 13:58:48.468: INFO: Got endpoints: latency-svc-bz7v2 [748.50974ms] Mar 18 13:58:48.530: INFO: Created: latency-svc-5mkgk Mar 18 13:58:48.610: INFO: Got endpoints: latency-svc-5mkgk [839.719468ms] Mar 18 13:58:48.611: INFO: Created: latency-svc-mx872 Mar 18 13:58:48.624: INFO: Got endpoints: latency-svc-mx872 [819.628134ms] Mar 18 13:58:48.649: INFO: Created: latency-svc-qnt85 Mar 18 13:58:48.661: INFO: Got endpoints: latency-svc-qnt85 [814.712919ms] Mar 18 13:58:48.680: INFO: Created: latency-svc-crr9g Mar 18 13:58:48.697: INFO: Got endpoints: latency-svc-crr9g [790.61997ms] Mar 18 13:58:48.755: INFO: Created: latency-svc-bwvl2 Mar 18 13:58:48.757: INFO: Got endpoints: latency-svc-bwvl2 [814.303261ms] Mar 18 13:58:48.788: INFO: Created: latency-svc-v5c89 Mar 18 13:58:48.805: INFO: Got endpoints: latency-svc-v5c89 [821.713955ms] Mar 18 13:58:48.835: INFO: Created: latency-svc-cm9qm Mar 18 13:58:48.848: INFO: Got endpoints: latency-svc-cm9qm [784.532614ms] Mar 18 13:58:48.911: INFO: Created: latency-svc-b6mcn Mar 18 13:58:48.931: INFO: Got endpoints: latency-svc-b6mcn [819.524425ms] Mar 18 13:58:48.962: INFO: Created: latency-svc-gxfr7 Mar 18 13:58:48.975: INFO: Got endpoints: latency-svc-gxfr7 [778.33614ms] Mar 18 13:58:48.991: INFO: Created: latency-svc-j8lwx Mar 18 13:58:49.041: INFO: Got endpoints: latency-svc-j8lwx [807.761628ms] Mar 18 
13:58:49.056: INFO: Created: latency-svc-xnmrk Mar 18 13:58:49.070: INFO: Got endpoints: latency-svc-xnmrk [801.998774ms] Mar 18 13:58:49.105: INFO: Created: latency-svc-k9qz4 Mar 18 13:58:49.123: INFO: Got endpoints: latency-svc-k9qz4 [807.105408ms] Mar 18 13:58:49.172: INFO: Created: latency-svc-mlzls Mar 18 13:58:49.214: INFO: Got endpoints: latency-svc-mlzls [840.828647ms] Mar 18 13:58:49.214: INFO: Created: latency-svc-8xvbm Mar 18 13:58:49.227: INFO: Got endpoints: latency-svc-8xvbm [785.448287ms] Mar 18 13:58:49.248: INFO: Created: latency-svc-llmvf Mar 18 13:58:49.264: INFO: Got endpoints: latency-svc-llmvf [796.2261ms] Mar 18 13:58:49.334: INFO: Created: latency-svc-djbjv Mar 18 13:58:49.337: INFO: Got endpoints: latency-svc-djbjv [727.274007ms] Mar 18 13:58:49.387: INFO: Created: latency-svc-h9lgr Mar 18 13:58:49.402: INFO: Got endpoints: latency-svc-h9lgr [778.214305ms] Mar 18 13:58:49.424: INFO: Created: latency-svc-94m7j Mar 18 13:58:49.483: INFO: Got endpoints: latency-svc-94m7j [822.794775ms] Mar 18 13:58:49.486: INFO: Created: latency-svc-l9fdl Mar 18 13:58:49.493: INFO: Got endpoints: latency-svc-l9fdl [795.663744ms] Mar 18 13:58:49.512: INFO: Created: latency-svc-t4l5n Mar 18 13:58:49.523: INFO: Got endpoints: latency-svc-t4l5n [765.7143ms] Mar 18 13:58:49.542: INFO: Created: latency-svc-7m4cg Mar 18 13:58:49.553: INFO: Got endpoints: latency-svc-7m4cg [747.80662ms] Mar 18 13:58:49.574: INFO: Created: latency-svc-j5nqm Mar 18 13:58:49.609: INFO: Got endpoints: latency-svc-j5nqm [761.48613ms] Mar 18 13:58:49.622: INFO: Created: latency-svc-8jnxg Mar 18 13:58:49.639: INFO: Got endpoints: latency-svc-8jnxg [707.307664ms] Mar 18 13:58:49.664: INFO: Created: latency-svc-rkrz5 Mar 18 13:58:49.680: INFO: Got endpoints: latency-svc-rkrz5 [705.521329ms] Mar 18 13:58:49.742: INFO: Created: latency-svc-gct88 Mar 18 13:58:49.744: INFO: Got endpoints: latency-svc-gct88 [703.116838ms] Mar 18 13:58:49.765: INFO: Created: latency-svc-fjwnz Mar 18 13:58:49.777: INFO: Got endpoints: latency-svc-fjwnz [706.853855ms] Mar 18 13:58:49.795: INFO: Created: latency-svc-zw5db Mar 18 13:58:49.831: INFO: Got endpoints: latency-svc-zw5db [707.865457ms] Mar 18 13:58:49.940: INFO: Created: latency-svc-zjtng Mar 18 13:58:49.949: INFO: Got endpoints: latency-svc-zjtng [735.414777ms] Mar 18 13:58:49.970: INFO: Created: latency-svc-xwzqq Mar 18 13:58:49.979: INFO: Got endpoints: latency-svc-xwzqq [752.003936ms] Mar 18 13:58:50.041: INFO: Created: latency-svc-56s85 Mar 18 13:58:50.043: INFO: Got endpoints: latency-svc-56s85 [779.653122ms] Mar 18 13:58:50.070: INFO: Created: latency-svc-f8csw Mar 18 13:58:50.106: INFO: Got endpoints: latency-svc-f8csw [768.969152ms] Mar 18 13:58:50.173: INFO: Created: latency-svc-xvrwd Mar 18 13:58:50.184: INFO: Got endpoints: latency-svc-xvrwd [782.031951ms] Mar 18 13:58:50.210: INFO: Created: latency-svc-xfz8d Mar 18 13:58:50.232: INFO: Got endpoints: latency-svc-xfz8d [748.887725ms] Mar 18 13:58:50.258: INFO: Created: latency-svc-c2sj7 Mar 18 13:58:50.334: INFO: Got endpoints: latency-svc-c2sj7 [841.013394ms] Mar 18 13:58:50.336: INFO: Created: latency-svc-mnsxm Mar 18 13:58:50.341: INFO: Got endpoints: latency-svc-mnsxm [817.731325ms] Mar 18 13:58:50.359: INFO: Created: latency-svc-zgfwz Mar 18 13:58:50.377: INFO: Got endpoints: latency-svc-zgfwz [824.148698ms] Mar 18 13:58:50.395: INFO: Created: latency-svc-lzzk8 Mar 18 13:58:50.407: INFO: Got endpoints: latency-svc-lzzk8 [797.877633ms] Mar 18 13:58:50.562: INFO: Created: latency-svc-frhgx Mar 18 13:58:50.568: INFO: Got 
endpoints: latency-svc-frhgx [928.942241ms] Mar 18 13:58:50.629: INFO: Created: latency-svc-ds7zg Mar 18 13:58:50.642: INFO: Got endpoints: latency-svc-ds7zg [961.609486ms] Mar 18 13:58:50.688: INFO: Created: latency-svc-x7z2m Mar 18 13:58:50.690: INFO: Got endpoints: latency-svc-x7z2m [945.516073ms] Mar 18 13:58:50.725: INFO: Created: latency-svc-mghj2 Mar 18 13:58:50.761: INFO: Got endpoints: latency-svc-mghj2 [984.13546ms] Mar 18 13:58:50.820: INFO: Created: latency-svc-5h2dq Mar 18 13:58:50.823: INFO: Got endpoints: latency-svc-5h2dq [991.420842ms] Mar 18 13:58:50.850: INFO: Created: latency-svc-rxwpp Mar 18 13:58:50.871: INFO: Got endpoints: latency-svc-rxwpp [921.735626ms] Mar 18 13:58:50.951: INFO: Created: latency-svc-nn27m Mar 18 13:58:50.954: INFO: Got endpoints: latency-svc-nn27m [974.355893ms] Mar 18 13:58:51.002: INFO: Created: latency-svc-j8f89 Mar 18 13:58:51.015: INFO: Got endpoints: latency-svc-j8f89 [971.175805ms] Mar 18 13:58:51.037: INFO: Created: latency-svc-bwxgd Mar 18 13:58:51.076: INFO: Got endpoints: latency-svc-bwxgd [970.368784ms] Mar 18 13:58:51.090: INFO: Created: latency-svc-qd2ct Mar 18 13:58:51.106: INFO: Got endpoints: latency-svc-qd2ct [921.659391ms] Mar 18 13:58:51.133: INFO: Created: latency-svc-x7fps Mar 18 13:58:51.150: INFO: Got endpoints: latency-svc-x7fps [917.150921ms] Mar 18 13:58:51.177: INFO: Created: latency-svc-f4pck Mar 18 13:58:51.215: INFO: Got endpoints: latency-svc-f4pck [880.597358ms] Mar 18 13:58:51.235: INFO: Created: latency-svc-wbvg4 Mar 18 13:58:51.244: INFO: Got endpoints: latency-svc-wbvg4 [903.527186ms] Mar 18 13:58:51.265: INFO: Created: latency-svc-zmm4g Mar 18 13:58:51.275: INFO: Got endpoints: latency-svc-zmm4g [897.361026ms] Mar 18 13:58:51.312: INFO: Created: latency-svc-52wjl Mar 18 13:58:51.352: INFO: Got endpoints: latency-svc-52wjl [944.741841ms] Mar 18 13:58:51.378: INFO: Created: latency-svc-dzxwb Mar 18 13:58:51.401: INFO: Got endpoints: latency-svc-dzxwb [833.740986ms] Mar 18 13:58:51.421: INFO: Created: latency-svc-zqm85 Mar 18 13:58:51.437: INFO: Got endpoints: latency-svc-zqm85 [795.174449ms] Mar 18 13:58:51.484: INFO: Created: latency-svc-p4fsr Mar 18 13:58:51.487: INFO: Got endpoints: latency-svc-p4fsr [797.112108ms] Mar 18 13:58:51.511: INFO: Created: latency-svc-5v2s5 Mar 18 13:58:51.533: INFO: Got endpoints: latency-svc-5v2s5 [771.872708ms] Mar 18 13:58:51.564: INFO: Created: latency-svc-2zbr2 Mar 18 13:58:51.576: INFO: Got endpoints: latency-svc-2zbr2 [753.014669ms] Mar 18 13:58:51.628: INFO: Created: latency-svc-dnw9l Mar 18 13:58:51.630: INFO: Got endpoints: latency-svc-dnw9l [759.235036ms] Mar 18 13:58:51.654: INFO: Created: latency-svc-npt4x Mar 18 13:58:51.667: INFO: Got endpoints: latency-svc-npt4x [712.725933ms] Mar 18 13:58:51.691: INFO: Created: latency-svc-ccsl5 Mar 18 13:58:51.715: INFO: Got endpoints: latency-svc-ccsl5 [699.876539ms] Mar 18 13:58:51.765: INFO: Created: latency-svc-x5bsm Mar 18 13:58:51.769: INFO: Got endpoints: latency-svc-x5bsm [692.461342ms] Mar 18 13:58:51.792: INFO: Created: latency-svc-fn5jl Mar 18 13:58:51.805: INFO: Got endpoints: latency-svc-fn5jl [699.563583ms] Mar 18 13:58:51.828: INFO: Created: latency-svc-w96xf Mar 18 13:58:51.842: INFO: Got endpoints: latency-svc-w96xf [691.993372ms] Mar 18 13:58:51.864: INFO: Created: latency-svc-qn9lg Mar 18 13:58:51.903: INFO: Got endpoints: latency-svc-qn9lg [687.908057ms] Mar 18 13:58:51.906: INFO: Created: latency-svc-gqfpl Mar 18 13:58:51.920: INFO: Got endpoints: latency-svc-gqfpl [675.774719ms] Mar 18 13:58:51.943: INFO: 
Created: latency-svc-pwxrk Mar 18 13:58:51.957: INFO: Got endpoints: latency-svc-pwxrk [681.670572ms] Mar 18 13:58:51.978: INFO: Created: latency-svc-bvlqw Mar 18 13:58:51.993: INFO: Got endpoints: latency-svc-bvlqw [640.868373ms] Mar 18 13:58:52.041: INFO: Created: latency-svc-mmhr2 Mar 18 13:58:52.044: INFO: Got endpoints: latency-svc-mmhr2 [642.350776ms] Mar 18 13:58:52.068: INFO: Created: latency-svc-25hlh Mar 18 13:58:52.084: INFO: Got endpoints: latency-svc-25hlh [646.782487ms] Mar 18 13:58:52.117: INFO: Created: latency-svc-khhjq Mar 18 13:58:52.132: INFO: Got endpoints: latency-svc-khhjq [644.686314ms] Mar 18 13:58:52.178: INFO: Created: latency-svc-ltw66 Mar 18 13:58:52.181: INFO: Got endpoints: latency-svc-ltw66 [647.851605ms] Mar 18 13:58:52.207: INFO: Created: latency-svc-bj2j7 Mar 18 13:58:52.216: INFO: Got endpoints: latency-svc-bj2j7 [640.120679ms] Mar 18 13:58:52.242: INFO: Created: latency-svc-mpbj4 Mar 18 13:58:52.259: INFO: Got endpoints: latency-svc-mpbj4 [628.509468ms] Mar 18 13:58:52.316: INFO: Created: latency-svc-kbgjn Mar 18 13:58:52.319: INFO: Got endpoints: latency-svc-kbgjn [652.568737ms] Mar 18 13:58:52.356: INFO: Created: latency-svc-7lgzg Mar 18 13:58:52.373: INFO: Got endpoints: latency-svc-7lgzg [658.333223ms] Mar 18 13:58:52.399: INFO: Created: latency-svc-vkrvw Mar 18 13:58:52.415: INFO: Got endpoints: latency-svc-vkrvw [646.439601ms] Mar 18 13:58:52.498: INFO: Created: latency-svc-wqlvd Mar 18 13:58:52.498: INFO: Got endpoints: latency-svc-wqlvd [692.670685ms] Mar 18 13:58:52.548: INFO: Created: latency-svc-ltmzk Mar 18 13:58:52.566: INFO: Got endpoints: latency-svc-ltmzk [724.425743ms] Mar 18 13:58:52.584: INFO: Created: latency-svc-b4mk7 Mar 18 13:58:52.682: INFO: Got endpoints: latency-svc-b4mk7 [779.042272ms] Mar 18 13:58:52.684: INFO: Created: latency-svc-c9pck Mar 18 13:58:52.719: INFO: Got endpoints: latency-svc-c9pck [798.422055ms] Mar 18 13:58:52.740: INFO: Created: latency-svc-xddst Mar 18 13:58:52.759: INFO: Got endpoints: latency-svc-xddst [802.154105ms] Mar 18 13:58:52.775: INFO: Created: latency-svc-gnj25 Mar 18 13:58:52.855: INFO: Got endpoints: latency-svc-gnj25 [862.186611ms] Mar 18 13:58:52.857: INFO: Created: latency-svc-qx8nq Mar 18 13:58:52.867: INFO: Got endpoints: latency-svc-qx8nq [823.143084ms] Mar 18 13:58:52.891: INFO: Created: latency-svc-mfqw6 Mar 18 13:58:52.903: INFO: Got endpoints: latency-svc-mfqw6 [819.261212ms] Mar 18 13:58:52.921: INFO: Created: latency-svc-6gjd7 Mar 18 13:58:52.938: INFO: Got endpoints: latency-svc-6gjd7 [805.937019ms] Mar 18 13:58:52.987: INFO: Created: latency-svc-k7zr4 Mar 18 13:58:53.015: INFO: Got endpoints: latency-svc-k7zr4 [833.570174ms] Mar 18 13:58:53.015: INFO: Created: latency-svc-fncqf Mar 18 13:58:53.028: INFO: Got endpoints: latency-svc-fncqf [811.846334ms] Mar 18 13:58:53.052: INFO: Created: latency-svc-xgb98 Mar 18 13:58:53.131: INFO: Got endpoints: latency-svc-xgb98 [871.872917ms] Mar 18 13:58:53.142: INFO: Created: latency-svc-v2sfx Mar 18 13:58:53.155: INFO: Got endpoints: latency-svc-v2sfx [835.420025ms] Mar 18 13:58:53.179: INFO: Created: latency-svc-ch7x5 Mar 18 13:58:53.191: INFO: Got endpoints: latency-svc-ch7x5 [817.815182ms] Mar 18 13:58:53.191: INFO: Latencies: [44.945451ms 71.589888ms 81.03427ms 153.999512ms 196.987593ms 232.877122ms 286.596081ms 317.278766ms 419.499003ms 442.547202ms 480.05522ms 515.009219ms 593.743036ms 617.288122ms 628.509468ms 637.018367ms 639.716829ms 640.120679ms 640.868373ms 642.350776ms 644.686314ms 646.439601ms 646.782487ms 647.846307ms 
647.851605ms 651.659867ms 652.568737ms 658.333223ms 658.587876ms 661.880973ms 669.238789ms 675.774719ms 681.670572ms 684.969719ms 686.485471ms 687.71921ms 687.908057ms 688.359377ms 688.363415ms 688.955882ms 689.266078ms 691.993372ms 692.461342ms 692.670685ms 699.269852ms 699.563583ms 699.876539ms 700.331901ms 702.502442ms 703.116838ms 705.521329ms 706.495477ms 706.567124ms 706.853855ms 707.307664ms 707.865457ms 710.330107ms 710.341542ms 710.709538ms 711.286227ms 711.717158ms 712.108683ms 712.725933ms 715.163855ms 716.222125ms 718.197912ms 718.250702ms 720.38636ms 722.840824ms 723.767863ms 724.087225ms 724.093495ms 724.180304ms 724.425743ms 725.134057ms 725.315908ms 727.274007ms 730.447336ms 730.554596ms 731.358389ms 731.607854ms 732.348543ms 735.414777ms 735.854911ms 736.855376ms 736.931742ms 738.282048ms 740.921396ms 742.005203ms 742.239043ms 742.428095ms 742.512933ms 747.784753ms 747.80662ms 747.817345ms 748.50974ms 748.673771ms 748.887725ms 752.003936ms 752.275922ms 753.014669ms 753.675171ms 753.873091ms 754.249815ms 754.327696ms 755.377735ms 758.224854ms 759.235036ms 759.492958ms 761.48613ms 763.821586ms 765.7143ms 766.541387ms 768.969152ms 771.872708ms 773.134685ms 777.628687ms 778.214305ms 778.2997ms 778.33614ms 779.022945ms 779.042272ms 779.653122ms 782.031951ms 782.310792ms 784.532614ms 785.448287ms 787.625775ms 788.861503ms 790.456053ms 790.61997ms 791.712601ms 795.174449ms 795.663744ms 796.2261ms 797.112108ms 797.877633ms 798.422055ms 801.998774ms 802.154105ms 805.937019ms 807.105408ms 807.761628ms 810.730636ms 811.846334ms 814.052972ms 814.303261ms 814.712919ms 817.413689ms 817.731325ms 817.815182ms 819.261212ms 819.524425ms 819.628134ms 821.713955ms 822.794775ms 823.143084ms 823.653502ms 824.148698ms 824.360849ms 833.570174ms 833.740986ms 835.420025ms 837.760849ms 837.884125ms 839.719468ms 840.828647ms 841.013394ms 844.243787ms 855.670913ms 861.77431ms 862.186611ms 867.422701ms 870.379344ms 870.796479ms 871.872917ms 873.587977ms 877.515497ms 879.764493ms 880.597358ms 889.068752ms 896.939755ms 897.361026ms 898.296924ms 903.527186ms 908.197246ms 908.896646ms 917.150921ms 921.659391ms 921.735626ms 922.243218ms 928.942241ms 944.741841ms 945.516073ms 961.609486ms 970.368784ms 971.175805ms 974.355893ms 984.13546ms 991.420842ms] Mar 18 13:58:53.191: INFO: 50 %ile: 753.014669ms Mar 18 13:58:53.191: INFO: 90 %ile: 889.068752ms Mar 18 13:58:53.191: INFO: 99 %ile: 984.13546ms Mar 18 13:58:53.191: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:58:53.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9715" for this suite. 
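The summary lines above reduce 200 endpoint-latency samples (time from Service creation to the endpoints being observed) to 50/90/99th percentiles. A small, runnable sketch of that reduction using a nearest-rank estimate (the e2e framework's exact method may differ; the sample values below are taken from the log):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns a nearest-rank style p-th percentile of sorted durations.
func percentile(sorted []time.Duration, p float64) time.Duration {
	idx := int(float64(len(sorted))*p/100.0 + 0.5)
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	samples := []time.Duration{ // a few of the 200 samples, in nanoseconds
		44945451, 753014669, 889068752, 984135460,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%v %%ile: %v\n", p, percentile(samples, p))
	}
}

The conformance requirement is only that these tail latencies are "not very high"; this run passes with a 99th percentile just under one second.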
Mar 18 13:59:15.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:59:15.284: INFO: namespace svc-latency-9715 deletion completed in 22.08584817s • [SLOW TEST:35.598 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:59:15.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 18 13:59:15.359: INFO: Waiting up to 5m0s for pod "downwardapi-volume-513217a0-f81e-4bed-8ed8-20c556bc57ae" in namespace "projected-937" to be "success or failure" Mar 18 13:59:15.375: INFO: Pod "downwardapi-volume-513217a0-f81e-4bed-8ed8-20c556bc57ae": Phase="Pending", Reason="", readiness=false. Elapsed: 16.728396ms Mar 18 13:59:17.379: INFO: Pod "downwardapi-volume-513217a0-f81e-4bed-8ed8-20c556bc57ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020486957s Mar 18 13:59:19.383: INFO: Pod "downwardapi-volume-513217a0-f81e-4bed-8ed8-20c556bc57ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024814961s STEP: Saw pod success Mar 18 13:59:19.383: INFO: Pod "downwardapi-volume-513217a0-f81e-4bed-8ed8-20c556bc57ae" satisfied condition "success or failure" Mar 18 13:59:19.387: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-513217a0-f81e-4bed-8ed8-20c556bc57ae container client-container: STEP: delete the pod Mar 18 13:59:19.414: INFO: Waiting for pod downwardapi-volume-513217a0-f81e-4bed-8ed8-20c556bc57ae to disappear Mar 18 13:59:19.418: INFO: Pod downwardapi-volume-513217a0-f81e-4bed-8ed8-20c556bc57ae no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:59:19.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-937" for this suite. 
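The projected downwardAPI test that just finished sets an explicit mode on a single projected item and stats the resulting file from inside the pod. A minimal sketch of the volume source involved, assuming core/v1 types (path and mode are illustrative):

// Fragment: a projected volume carrying one downward-API item with mode 0400.
mode := int32(0400)
vol := v1.VolumeSource{
	Projected: &v1.ProjectedVolumeSource{
		Sources: []v1.VolumeProjection{{
			DownwardAPI: &v1.DownwardAPIProjection{
				Items: []v1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
					Mode:     &mode, // the per-item mode the test asserts on the file
				}},
			},
		}},
	},
}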
Mar 18 13:59:25.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:59:25.540: INFO: namespace projected-937 deletion completed in 6.118943969s • [SLOW TEST:10.256 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:59:25.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 18 13:59:25.607: INFO: Waiting up to 5m0s for pod "downward-api-bdab163a-07a9-4283-8d3d-fb57341f16fc" in namespace "downward-api-6733" to be "success or failure" Mar 18 13:59:25.616: INFO: Pod "downward-api-bdab163a-07a9-4283-8d3d-fb57341f16fc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.399898ms Mar 18 13:59:27.621: INFO: Pod "downward-api-bdab163a-07a9-4283-8d3d-fb57341f16fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013999319s Mar 18 13:59:29.625: INFO: Pod "downward-api-bdab163a-07a9-4283-8d3d-fb57341f16fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018227731s STEP: Saw pod success Mar 18 13:59:29.625: INFO: Pod "downward-api-bdab163a-07a9-4283-8d3d-fb57341f16fc" satisfied condition "success or failure" Mar 18 13:59:29.628: INFO: Trying to get logs from node iruya-worker pod downward-api-bdab163a-07a9-4283-8d3d-fb57341f16fc container dapi-container: STEP: delete the pod Mar 18 13:59:29.647: INFO: Waiting for pod downward-api-bdab163a-07a9-4283-8d3d-fb57341f16fc to disappear Mar 18 13:59:29.652: INFO: Pod downward-api-bdab163a-07a9-4283-8d3d-fb57341f16fc no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 13:59:29.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6733" for this suite. 
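The Downward API env-var test above wires the container's own resource requests and limits into its environment via resourceFieldRef, so the values are available without any API-server round trip at runtime. A sketch of that wiring, assuming core/v1 types (variable names are illustrative):

// Fragment: expose limits.cpu/memory and requests.cpu/memory as env vars.
env := []v1.EnvVar{
	{Name: "CPU_LIMIT", ValueFrom: &v1.EnvVarSource{
		ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.cpu"}}},
	{Name: "MEMORY_LIMIT", ValueFrom: &v1.EnvVarSource{
		ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.memory"}}},
	{Name: "CPU_REQUEST", ValueFrom: &v1.EnvVarSource{
		ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "requests.cpu"}}},
	{Name: "MEMORY_REQUEST", ValueFrom: &v1.EnvVarSource{
		ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "requests.memory"}}},
}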
Mar 18 13:59:35.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 13:59:35.815: INFO: namespace downward-api-6733 deletion completed in 6.159932335s • [SLOW TEST:10.274 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 13:59:35.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-xkbt STEP: Creating a pod to test atomic-volume-subpath Mar 18 13:59:35.898: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-xkbt" in namespace "subpath-6610" to be "success or failure" Mar 18 13:59:35.902: INFO: Pod "pod-subpath-test-projected-xkbt": Phase="Pending", Reason="", readiness=false. Elapsed: 3.094964ms Mar 18 13:59:37.906: INFO: Pod "pod-subpath-test-projected-xkbt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007192996s Mar 18 13:59:39.910: INFO: Pod "pod-subpath-test-projected-xkbt": Phase="Running", Reason="", readiness=true. Elapsed: 4.011436846s Mar 18 13:59:41.914: INFO: Pod "pod-subpath-test-projected-xkbt": Phase="Running", Reason="", readiness=true. Elapsed: 6.015088188s Mar 18 13:59:43.918: INFO: Pod "pod-subpath-test-projected-xkbt": Phase="Running", Reason="", readiness=true. Elapsed: 8.019447677s Mar 18 13:59:45.922: INFO: Pod "pod-subpath-test-projected-xkbt": Phase="Running", Reason="", readiness=true. Elapsed: 10.02393637s Mar 18 13:59:47.926: INFO: Pod "pod-subpath-test-projected-xkbt": Phase="Running", Reason="", readiness=true. Elapsed: 12.02715333s Mar 18 13:59:49.930: INFO: Pod "pod-subpath-test-projected-xkbt": Phase="Running", Reason="", readiness=true. Elapsed: 14.031378274s Mar 18 13:59:51.935: INFO: Pod "pod-subpath-test-projected-xkbt": Phase="Running", Reason="", readiness=true. Elapsed: 16.036052293s Mar 18 13:59:53.939: INFO: Pod "pod-subpath-test-projected-xkbt": Phase="Running", Reason="", readiness=true. Elapsed: 18.040945774s Mar 18 13:59:55.944: INFO: Pod "pod-subpath-test-projected-xkbt": Phase="Running", Reason="", readiness=true. Elapsed: 20.045262805s Mar 18 13:59:57.948: INFO: Pod "pod-subpath-test-projected-xkbt": Phase="Running", Reason="", readiness=true. Elapsed: 22.049928792s Mar 18 13:59:59.953: INFO: Pod "pod-subpath-test-projected-xkbt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.054262804s STEP: Saw pod success Mar 18 13:59:59.953: INFO: Pod "pod-subpath-test-projected-xkbt" satisfied condition "success or failure" Mar 18 13:59:59.956: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-xkbt container test-container-subpath-projected-xkbt: STEP: delete the pod Mar 18 13:59:59.995: INFO: Waiting for pod pod-subpath-test-projected-xkbt to disappear Mar 18 14:00:00.010: INFO: Pod pod-subpath-test-projected-xkbt no longer exists STEP: Deleting pod pod-subpath-test-projected-xkbt Mar 18 14:00:00.010: INFO: Deleting pod "pod-subpath-test-projected-xkbt" in namespace "subpath-6610" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:00:00.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6610" for this suite. Mar 18 14:00:06.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:00:06.114: INFO: namespace subpath-6610 deletion completed in 6.095731969s • [SLOW TEST:30.300 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:00:06.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 18 14:00:06.193: INFO: Waiting up to 5m0s for pod "pod-e1512fb3-0523-4b9e-8844-e2431944be43" in namespace "emptydir-8921" to be "success or failure" Mar 18 14:00:06.203: INFO: Pod "pod-e1512fb3-0523-4b9e-8844-e2431944be43": Phase="Pending", Reason="", readiness=false. Elapsed: 10.411845ms Mar 18 14:00:08.208: INFO: Pod "pod-e1512fb3-0523-4b9e-8844-e2431944be43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014605124s Mar 18 14:00:10.211: INFO: Pod "pod-e1512fb3-0523-4b9e-8844-e2431944be43": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018376871s STEP: Saw pod success Mar 18 14:00:10.211: INFO: Pod "pod-e1512fb3-0523-4b9e-8844-e2431944be43" satisfied condition "success or failure" Mar 18 14:00:10.214: INFO: Trying to get logs from node iruya-worker2 pod pod-e1512fb3-0523-4b9e-8844-e2431944be43 container test-container: STEP: delete the pod Mar 18 14:00:10.228: INFO: Waiting for pod pod-e1512fb3-0523-4b9e-8844-e2431944be43 to disappear Mar 18 14:00:10.234: INFO: Pod pod-e1512fb3-0523-4b9e-8844-e2431944be43 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:00:10.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8921" for this suite. Mar 18 14:00:16.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:00:16.332: INFO: namespace emptydir-8921 deletion completed in 6.094935029s • [SLOW TEST:10.217 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:00:16.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 18 14:00:16.389: INFO: Waiting up to 5m0s for pod "downward-api-428f2da7-9398-4560-bef1-12acad5a2991" in namespace "downward-api-1737" to be "success or failure" Mar 18 14:00:16.413: INFO: Pod "downward-api-428f2da7-9398-4560-bef1-12acad5a2991": Phase="Pending", Reason="", readiness=false. Elapsed: 23.976161ms Mar 18 14:00:18.418: INFO: Pod "downward-api-428f2da7-9398-4560-bef1-12acad5a2991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027994408s Mar 18 14:00:20.422: INFO: Pod "downward-api-428f2da7-9398-4560-bef1-12acad5a2991": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032452806s STEP: Saw pod success Mar 18 14:00:20.422: INFO: Pod "downward-api-428f2da7-9398-4560-bef1-12acad5a2991" satisfied condition "success or failure" Mar 18 14:00:20.425: INFO: Trying to get logs from node iruya-worker pod downward-api-428f2da7-9398-4560-bef1-12acad5a2991 container dapi-container: STEP: delete the pod Mar 18 14:00:20.475: INFO: Waiting for pod downward-api-428f2da7-9398-4560-bef1-12acad5a2991 to disappear Mar 18 14:00:20.479: INFO: Pod downward-api-428f2da7-9398-4560-bef1-12acad5a2991 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:00:20.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1737" for this suite. Mar 18 14:00:26.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:00:26.572: INFO: namespace downward-api-1737 deletion completed in 6.089098461s • [SLOW TEST:10.240 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:00:26.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:00:30.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8515" for this suite. 
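The Kubelet test above schedules a busybox container whose root filesystem is mounted read-only and expects writes to / to fail. The knob is the container-level security context; a sketch assuming core/v1 types (command and names are illustrative):

// Fragment: a container that must not be able to write to its root filesystem.
readOnly := true
c := v1.Container{
	Name:  "busybox-readonly",
	Image: "busybox",
	// The expectation: the redirect fails with a read-only filesystem error.
	Command: []string{"sh", "-c", "echo test > /file; sleep 240"},
	SecurityContext: &v1.SecurityContext{
		ReadOnlyRootFilesystem: &readOnly,
	},
}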
Mar 18 14:01:16.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:01:16.757: INFO: namespace kubelet-test-8515 deletion completed in 46.093683372s • [SLOW TEST:50.183 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:01:16.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 18 14:01:16.800: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a9e4578-9329-4ae6-90a3-f95cb1dbd351" in namespace "projected-9630" to be "success or failure" Mar 18 14:01:16.824: INFO: Pod "downwardapi-volume-5a9e4578-9329-4ae6-90a3-f95cb1dbd351": Phase="Pending", Reason="", readiness=false. Elapsed: 24.25261ms Mar 18 14:01:18.833: INFO: Pod "downwardapi-volume-5a9e4578-9329-4ae6-90a3-f95cb1dbd351": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032812662s Mar 18 14:01:20.837: INFO: Pod "downwardapi-volume-5a9e4578-9329-4ae6-90a3-f95cb1dbd351": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036904436s STEP: Saw pod success Mar 18 14:01:20.837: INFO: Pod "downwardapi-volume-5a9e4578-9329-4ae6-90a3-f95cb1dbd351" satisfied condition "success or failure" Mar 18 14:01:20.839: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5a9e4578-9329-4ae6-90a3-f95cb1dbd351 container client-container: STEP: delete the pod Mar 18 14:01:20.860: INFO: Waiting for pod downwardapi-volume-5a9e4578-9329-4ae6-90a3-f95cb1dbd351 to disappear Mar 18 14:01:20.864: INFO: Pod downwardapi-volume-5a9e4578-9329-4ae6-90a3-f95cb1dbd351 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:01:20.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9630" for this suite. 
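------------------------------
The projected downwardAPI test above reads the container's memory limit back out of a file. The wiring behind it is a projected volume whose downward API item uses a resourceFieldRef for limits.memory; a sketch follows, with the mount path, image, and the 64Mi limit chosen here as assumptions (the container name client-container matches the one in the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29", // assumed image
				Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"), // assumed limit
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									// Expose this container's memory limit as file content.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------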
Mar 18 14:01:26.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:01:26.981: INFO: namespace projected-9630 deletion completed in 6.114100491s • [SLOW TEST:10.224 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:01:26.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 14:01:27.083: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 18 14:01:32.087: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 18 14:01:32.087: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 18 14:01:34.091: INFO: Creating deployment "test-rollover-deployment" Mar 18 14:01:34.101: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 18 14:01:36.126: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 18 14:01:36.147: INFO: Ensure that both replica sets have 1 created replica Mar 18 14:01:36.153: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 18 14:01:36.179: INFO: Updating deployment test-rollover-deployment Mar 18 14:01:36.179: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 18 14:01:38.189: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 18 14:01:38.196: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 18 14:01:38.202: INFO: all replica sets need to contain the pod-template-hash label Mar 18 14:01:38.202: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136896, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" 
is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 14:01:40.211: INFO: all replica sets need to contain the pod-template-hash label Mar 18 14:01:40.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136899, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 14:01:42.210: INFO: all replica sets need to contain the pod-template-hash label Mar 18 14:01:42.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136899, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 14:01:44.211: INFO: all replica sets need to contain the pod-template-hash label Mar 18 14:01:44.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136899, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 14:01:46.210: INFO: all replica sets need to contain the pod-template-hash label Mar 18 14:01:46.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136899, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 14:01:48.210: INFO: all replica sets need to contain the pod-template-hash label Mar 18 14:01:48.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136899, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720136894, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 14:01:50.211: INFO: Mar 18 14:01:50.211: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 18 14:01:50.219: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-4245,SelfLink:/apis/apps/v1/namespaces/deployment-4245/deployments/test-rollover-deployment,UID:7edd490c-6e42-4fc1-bf12-a23da5313ea1,ResourceVersion:528039,Generation:2,CreationTimestamp:2020-03-18 14:01:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-18 14:01:34 +0000 UTC 2020-03-18 14:01:34 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-18 14:01:49 +0000 UTC 2020-03-18 14:01:34 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 18 14:01:50.221: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-4245,SelfLink:/apis/apps/v1/namespaces/deployment-4245/replicasets/test-rollover-deployment-854595fc44,UID:d7cec3c2-e407-4240-80a4-d723ed4719e5,ResourceVersion:528028,Generation:2,CreationTimestamp:2020-03-18 14:01:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7edd490c-6e42-4fc1-bf12-a23da5313ea1 0xc00265e927 0xc00265e928}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 18 14:01:50.221: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 18 14:01:50.222: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-4245,SelfLink:/apis/apps/v1/namespaces/deployment-4245/replicasets/test-rollover-controller,UID:4cc06dfa-1520-42fb-b53a-77d2b2419384,ResourceVersion:528037,Generation:2,CreationTimestamp:2020-03-18 14:01:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7edd490c-6e42-4fc1-bf12-a23da5313ea1 0xc00265e857 0xc00265e858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 18 14:01:50.222: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-4245,SelfLink:/apis/apps/v1/namespaces/deployment-4245/replicasets/test-rollover-deployment-9b8b997cf,UID:71903916-a2d4-4e8c-82e7-6bcd0ff3465e,ResourceVersion:527989,Generation:2,CreationTimestamp:2020-03-18 14:01:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7edd490c-6e42-4fc1-bf12-a23da5313ea1 0xc00265e9f0 0xc00265e9f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 18 14:01:50.224: INFO: Pod "test-rollover-deployment-854595fc44-78nkq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-78nkq,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-4245,SelfLink:/api/v1/namespaces/deployment-4245/pods/test-rollover-deployment-854595fc44-78nkq,UID:b213f4fd-053a-40bf-af0b-76be9831b2a1,ResourceVersion:528005,Generation:0,CreationTimestamp:2020-03-18 14:01:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 d7cec3c2-e407-4240-80a4-d723ed4719e5 0xc00337d7c7 0xc00337d7c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jrgvf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jrgvf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-jrgvf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00337d840} {node.kubernetes.io/unreachable Exists NoExecute 0xc00337d860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:01:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:01:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 
+0000 UTC 2020-03-18 14:01:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:01:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.93,StartTime:2020-03-18 14:01:36 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-18 14:01:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://ce51221a0292b4b0c81225b4ec4ee1c8ab614c836bfd9d044871cb18a849d782}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:01:50.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4245" for this suite. Mar 18 14:01:56.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:01:56.323: INFO: namespace deployment-4245 deletion completed in 6.095725609s • [SLOW TEST:29.341 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:01:56.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 18 14:02:00.935: INFO: Successfully updated pod "annotationupdate2733cdb2-d831-499d-80a2-97c5daaa51d6" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:02:02.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8524" for this suite. 
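------------------------------
For the rollover test dumped at length above, the interesting part of the Deployment spec is visible in the dump itself: a RollingUpdate strategy with MaxUnavailable:0 and MaxSurge:1, plus MinReadySeconds:10, which is why the suite polls until the new ReplicaSet's pod has been ready for ten seconds before the old ReplicaSets are scaled to zero. A condensed reconstruction of that spec (construction and printing only; the suite builds it through its own helpers):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)
	labels := map[string]string{"name": "rollover-pod"} // matches the dumped selector
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// MaxUnavailable=0 keeps the old pod until the new one has been
			// ready for MinReadySeconds; MaxSurge=1 allows one extra pod.
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			MinReadySeconds: 10,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "redis",
					Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0", // image from the dump
				}}},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out))
}
------------------------------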
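------------------------------
The annotation-update test just above relies on the kubelet refreshing downward API volume files when pod metadata changes: the suite patches the pod's annotations and watches the mounted file's content change. A sketch of the pod shape such a test creates, assuming an illustrative annotation value, image, and mount path:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example",          // illustrative name
			Annotations: map[string]string{"builder": "bar"}, // assumed initial value
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "docker.io/library/busybox:1.29", // assumed image
				// Re-read the projected file in a loop; its content changes
				// when the pod's annotations are updated.
				Command:      []string{"/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------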
Mar 18 14:02:24.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:02:25.056: INFO: namespace projected-8524 deletion completed in 22.088400904s • [SLOW TEST:28.733 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:02:25.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 14:02:25.144: INFO: Create a RollingUpdate DaemonSet Mar 18 14:02:25.147: INFO: Check that daemon pods launch on every node of the cluster Mar 18 14:02:25.153: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:02:25.170: INFO: Number of nodes with available pods: 0 Mar 18 14:02:25.170: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:02:26.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:02:26.186: INFO: Number of nodes with available pods: 0 Mar 18 14:02:26.186: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:02:27.267: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:02:27.270: INFO: Number of nodes with available pods: 0 Mar 18 14:02:27.270: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:02:28.174: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:02:28.177: INFO: Number of nodes with available pods: 0 Mar 18 14:02:28.177: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:02:29.175: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:02:29.179: INFO: Number of nodes with available pods: 2 Mar 18 14:02:29.179: INFO: Number of running nodes: 2, number of available pods: 2 Mar 18 14:02:29.179: INFO: Update the DaemonSet to trigger a rollout Mar 18 14:02:29.185: INFO: Updating DaemonSet daemon-set Mar 18 14:02:33.202: INFO: Roll back the DaemonSet before rollout is complete Mar 18 14:02:33.209: 
INFO: Updating DaemonSet daemon-set Mar 18 14:02:33.209: INFO: Make sure DaemonSet rollback is complete Mar 18 14:02:33.216: INFO: Wrong image for pod: daemon-set-h5swv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Mar 18 14:02:33.216: INFO: Pod daemon-set-h5swv is not available Mar 18 14:02:33.223: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:02:34.228: INFO: Wrong image for pod: daemon-set-h5swv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Mar 18 14:02:34.228: INFO: Pod daemon-set-h5swv is not available Mar 18 14:02:34.231: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:02:35.227: INFO: Pod daemon-set-tfgzt is not available Mar 18 14:02:35.230: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3618, will wait for the garbage collector to delete the pods Mar 18 14:02:35.292: INFO: Deleting DaemonSet.extensions daemon-set took: 5.481484ms Mar 18 14:02:35.593: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.376768ms Mar 18 14:02:38.301: INFO: Number of nodes with available pods: 0 Mar 18 14:02:38.301: INFO: Number of running nodes: 0, number of available pods: 0 Mar 18 14:02:38.303: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3618/daemonsets","resourceVersion":"528271"},"items":null} Mar 18 14:02:38.305: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3618/pods","resourceVersion":"528271"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:02:38.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3618" for this suite. 
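------------------------------
The DaemonSet in the rollback scenario above starts on docker.io/library/nginx:1.14-alpine (per the "Wrong image" log lines), is updated to the unpullable foo:non-existent, and is then reverted before the rollout finishes; only the pod that was already replaced (daemon-set-h5swv) gets recreated. A sketch of the DaemonSet object involved, with the label key and container name as assumptions:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label key
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate makes the controller replace pods node by node
			// when the template changes, which is what the rollback interrupts.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app", // assumed container name
					Image: "docker.io/library/nginx:1.14-alpine",
				}}},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
------------------------------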
Mar 18 14:02:44.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:02:44.419: INFO: namespace daemonsets-3618 deletion completed in 6.101757244s • [SLOW TEST:19.363 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:02:44.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 18 14:02:44.495: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5c2e0fc-8603-4424-92d3-6253cfbd0b14" in namespace "downward-api-9615" to be "success or failure" Mar 18 14:02:44.506: INFO: Pod "downwardapi-volume-a5c2e0fc-8603-4424-92d3-6253cfbd0b14": Phase="Pending", Reason="", readiness=false. Elapsed: 11.861514ms Mar 18 14:02:46.517: INFO: Pod "downwardapi-volume-a5c2e0fc-8603-4424-92d3-6253cfbd0b14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022353429s Mar 18 14:02:48.521: INFO: Pod "downwardapi-volume-a5c2e0fc-8603-4424-92d3-6253cfbd0b14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026176317s STEP: Saw pod success Mar 18 14:02:48.521: INFO: Pod "downwardapi-volume-a5c2e0fc-8603-4424-92d3-6253cfbd0b14" satisfied condition "success or failure" Mar 18 14:02:48.524: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a5c2e0fc-8603-4424-92d3-6253cfbd0b14 container client-container: STEP: delete the pod Mar 18 14:02:48.538: INFO: Waiting for pod downwardapi-volume-a5c2e0fc-8603-4424-92d3-6253cfbd0b14 to disappear Mar 18 14:02:48.542: INFO: Pod downwardapi-volume-a5c2e0fc-8603-4424-92d3-6253cfbd0b14 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:02:48.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9615" for this suite. 
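------------------------------
The "set mode on item file" case above has the kubelet write a downward API file with an explicit per-item mode and the container report it back. A sketch, assuming mode 0400 and the /etc/podinfo mount path (both illustrative; the log does not show the actual values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // assumed per-item file mode the test verifies
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-item-mode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "docker.io/library/busybox:1.29", // assumed image
				// Print the octal permissions of the projected file.
				Command:      []string{"/bin/sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
							Mode:     &mode,
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------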
Mar 18 14:02:54.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:02:54.633: INFO: namespace downward-api-9615 deletion completed in 6.088215045s • [SLOW TEST:10.214 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:02:54.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 18 14:02:54.695: INFO: Waiting up to 5m0s for pod "pod-3a041f07-fcfc-4de1-a693-a4a831100770" in namespace "emptydir-1425" to be "success or failure" Mar 18 14:02:54.699: INFO: Pod "pod-3a041f07-fcfc-4de1-a693-a4a831100770": Phase="Pending", Reason="", readiness=false. Elapsed: 3.708313ms Mar 18 14:02:56.702: INFO: Pod "pod-3a041f07-fcfc-4de1-a693-a4a831100770": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007352937s Mar 18 14:02:58.707: INFO: Pod "pod-3a041f07-fcfc-4de1-a693-a4a831100770": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011802244s STEP: Saw pod success Mar 18 14:02:58.707: INFO: Pod "pod-3a041f07-fcfc-4de1-a693-a4a831100770" satisfied condition "success or failure" Mar 18 14:02:58.710: INFO: Trying to get logs from node iruya-worker pod pod-3a041f07-fcfc-4de1-a693-a4a831100770 container test-container: STEP: delete the pod Mar 18 14:02:58.745: INFO: Waiting for pod pod-3a041f07-fcfc-4de1-a693-a4a831100770 to disappear Mar 18 14:02:58.758: INFO: Pod pod-3a041f07-fcfc-4de1-a693-a4a831100770 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:02:58.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1425" for this suite. 
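------------------------------
The emptyDir case above, (root,0666,default), mounts an emptyDir backed by the node's default medium and verifies the permissions of a file created inside it. A minimal sketch of the pod shape; the mount path and shell command are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29", // assumed image
				// Create a file with mode 0666 and print its permissions.
				Command: []string{"/bin/sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Default medium = node-local disk; StorageMediumMemory would be tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------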
Mar 18 14:03:04.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:03:04.852: INFO: namespace emptydir-1425 deletion completed in 6.090974846s • [SLOW TEST:10.219 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:03:04.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 18 14:03:04.939: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad1c091d-b5c5-4c88-ab7c-5047ec6ca281" in namespace "downward-api-164" to be "success or failure" Mar 18 14:03:04.961: INFO: Pod "downwardapi-volume-ad1c091d-b5c5-4c88-ab7c-5047ec6ca281": Phase="Pending", Reason="", readiness=false. Elapsed: 22.380327ms Mar 18 14:03:07.020: INFO: Pod "downwardapi-volume-ad1c091d-b5c5-4c88-ab7c-5047ec6ca281": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081273898s Mar 18 14:03:09.023: INFO: Pod "downwardapi-volume-ad1c091d-b5c5-4c88-ab7c-5047ec6ca281": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084446874s STEP: Saw pod success Mar 18 14:03:09.024: INFO: Pod "downwardapi-volume-ad1c091d-b5c5-4c88-ab7c-5047ec6ca281" satisfied condition "success or failure" Mar 18 14:03:09.026: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ad1c091d-b5c5-4c88-ab7c-5047ec6ca281 container client-container: STEP: delete the pod Mar 18 14:03:09.070: INFO: Waiting for pod downwardapi-volume-ad1c091d-b5c5-4c88-ab7c-5047ec6ca281 to disappear Mar 18 14:03:09.085: INFO: Pod downwardapi-volume-ad1c091d-b5c5-4c88-ab7c-5047ec6ca281 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:03:09.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-164" for this suite. 
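------------------------------
The "podname only" test above differs from the item-mode sketch earlier only in the volume source: a single downward API item exposing metadata.name, with no explicit mode, which the container then cats from the mounted file. Just that fragment, printed standalone:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The "podname only" file: the pod's metadata.name, exposed (with an
	// assumed mount) at /etc/podinfo/podname.
	src := corev1.DownwardAPIVolumeSource{
		Items: []corev1.DownwardAPIVolumeFile{{
			Path:     "podname",
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
		}},
	}
	out, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(out))
}
------------------------------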
Mar 18 14:03:15.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:03:15.193: INFO: namespace downward-api-164 deletion completed in 6.105641056s • [SLOW TEST:10.341 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:03:15.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:04:15.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8761" for this suite. 
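------------------------------
The probe test above waits a full minute to confirm that a pod whose readiness probe always fails is never reported Ready and is never restarted; readiness failures, unlike liveness failures, do not restart containers, they only keep the pod out of service endpoints. A sketch under the v1.15-era types (corev1.Handler was later renamed ProbeHandler); the image and commands are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-always-fails"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "docker.io/library/busybox:1.29", // assumed image
				Command: []string{"/bin/sh", "-c", "sleep 600"},
				ReadinessProbe: &corev1.Probe{
					// /bin/false always exits 1, so the container never becomes
					// Ready, yet its restart count must stay at zero.
					Handler:             corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------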
Mar 18 14:04:37.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:04:37.404: INFO: namespace container-probe-8761 deletion completed in 22.123749571s • [SLOW TEST:82.210 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:04:37.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 18 14:04:47.538: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6701 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 14:04:47.538: INFO: >>> kubeConfig: /root/.kube/config I0318 14:04:47.579369 6 log.go:172] (0xc002de2790) (0xc0030cd680) Create stream I0318 14:04:47.579400 6 log.go:172] (0xc002de2790) (0xc0030cd680) Stream added, broadcasting: 1 I0318 14:04:47.585869 6 log.go:172] (0xc002de2790) Reply frame received for 1 I0318 14:04:47.585910 6 log.go:172] (0xc002de2790) (0xc002135ae0) Create stream I0318 14:04:47.585923 6 log.go:172] (0xc002de2790) (0xc002135ae0) Stream added, broadcasting: 3 I0318 14:04:47.587126 6 log.go:172] (0xc002de2790) Reply frame received for 3 I0318 14:04:47.587176 6 log.go:172] (0xc002de2790) (0xc0030cd720) Create stream I0318 14:04:47.587197 6 log.go:172] (0xc002de2790) (0xc0030cd720) Stream added, broadcasting: 5 I0318 14:04:47.588333 6 log.go:172] (0xc002de2790) Reply frame received for 5 I0318 14:04:47.637562 6 log.go:172] (0xc002de2790) Data frame received for 3 I0318 14:04:47.637586 6 log.go:172] (0xc002135ae0) (3) Data frame handling I0318 14:04:47.637593 6 log.go:172] (0xc002135ae0) (3) Data frame sent I0318 14:04:47.637599 6 log.go:172] (0xc002de2790) Data frame received for 3 I0318 14:04:47.637603 6 log.go:172] (0xc002135ae0) (3) Data frame handling I0318 14:04:47.637686 6 log.go:172] (0xc002de2790) Data frame received for 5 I0318 14:04:47.637712 6 log.go:172] (0xc0030cd720) (5) Data frame handling I0318 14:04:47.639496 6 log.go:172] (0xc002de2790) Data frame received for 1 I0318 14:04:47.639508 6 log.go:172] (0xc0030cd680) (1) Data frame handling I0318 14:04:47.639519 6 log.go:172] (0xc0030cd680) (1) Data frame sent I0318 14:04:47.639526 6 log.go:172] (0xc002de2790) (0xc0030cd680) Stream removed, broadcasting: 1 I0318 
14:04:47.639574 6 log.go:172] (0xc002de2790) Go away received I0318 14:04:47.639686 6 log.go:172] (0xc002de2790) (0xc0030cd680) Stream removed, broadcasting: 1 I0318 14:04:47.639705 6 log.go:172] (0xc002de2790) (0xc002135ae0) Stream removed, broadcasting: 3 I0318 14:04:47.639714 6 log.go:172] (0xc002de2790) (0xc0030cd720) Stream removed, broadcasting: 5 Mar 18 14:04:47.639: INFO: Exec stderr: "" Mar 18 14:04:47.639: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6701 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 14:04:47.639: INFO: >>> kubeConfig: /root/.kube/config I0318 14:04:47.668153 6 log.go:172] (0xc001b311e0) (0xc002135ea0) Create stream I0318 14:04:47.668187 6 log.go:172] (0xc001b311e0) (0xc002135ea0) Stream added, broadcasting: 1 I0318 14:04:47.670832 6 log.go:172] (0xc001b311e0) Reply frame received for 1 I0318 14:04:47.670886 6 log.go:172] (0xc001b311e0) (0xc002135f40) Create stream I0318 14:04:47.670905 6 log.go:172] (0xc001b311e0) (0xc002135f40) Stream added, broadcasting: 3 I0318 14:04:47.671690 6 log.go:172] (0xc001b311e0) Reply frame received for 3 I0318 14:04:47.671732 6 log.go:172] (0xc001b311e0) (0xc0003b4320) Create stream I0318 14:04:47.671744 6 log.go:172] (0xc001b311e0) (0xc0003b4320) Stream added, broadcasting: 5 I0318 14:04:47.672575 6 log.go:172] (0xc001b311e0) Reply frame received for 5 I0318 14:04:47.742553 6 log.go:172] (0xc001b311e0) Data frame received for 3 I0318 14:04:47.742581 6 log.go:172] (0xc002135f40) (3) Data frame handling I0318 14:04:47.742590 6 log.go:172] (0xc002135f40) (3) Data frame sent I0318 14:04:47.742595 6 log.go:172] (0xc001b311e0) Data frame received for 3 I0318 14:04:47.742601 6 log.go:172] (0xc002135f40) (3) Data frame handling I0318 14:04:47.742625 6 log.go:172] (0xc001b311e0) Data frame received for 5 I0318 14:04:47.742637 6 log.go:172] (0xc0003b4320) (5) Data frame handling I0318 14:04:47.744384 6 log.go:172] (0xc001b311e0) Data frame received for 1 I0318 14:04:47.744411 6 log.go:172] (0xc002135ea0) (1) Data frame handling I0318 14:04:47.744429 6 log.go:172] (0xc002135ea0) (1) Data frame sent I0318 14:04:47.744458 6 log.go:172] (0xc001b311e0) (0xc002135ea0) Stream removed, broadcasting: 1 I0318 14:04:47.744483 6 log.go:172] (0xc001b311e0) Go away received I0318 14:04:47.744577 6 log.go:172] (0xc001b311e0) (0xc002135ea0) Stream removed, broadcasting: 1 I0318 14:04:47.744598 6 log.go:172] (0xc001b311e0) (0xc002135f40) Stream removed, broadcasting: 3 I0318 14:04:47.744616 6 log.go:172] (0xc001b311e0) (0xc0003b4320) Stream removed, broadcasting: 5 Mar 18 14:04:47.744: INFO: Exec stderr: "" Mar 18 14:04:47.744: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6701 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 14:04:47.744: INFO: >>> kubeConfig: /root/.kube/config I0318 14:04:47.777930 6 log.go:172] (0xc002078160) (0xc001c52320) Create stream I0318 14:04:47.777956 6 log.go:172] (0xc002078160) (0xc001c52320) Stream added, broadcasting: 1 I0318 14:04:47.780446 6 log.go:172] (0xc002078160) Reply frame received for 1 I0318 14:04:47.780492 6 log.go:172] (0xc002078160) (0xc0012e6e60) Create stream I0318 14:04:47.780502 6 log.go:172] (0xc002078160) (0xc0012e6e60) Stream added, broadcasting: 3 I0318 14:04:47.781650 6 log.go:172] (0xc002078160) Reply frame received for 3 I0318 14:04:47.781680 6 log.go:172] (0xc002078160) 
(0xc0030cd7c0) Create stream I0318 14:04:47.781688 6 log.go:172] (0xc002078160) (0xc0030cd7c0) Stream added, broadcasting: 5 I0318 14:04:47.782522 6 log.go:172] (0xc002078160) Reply frame received for 5 I0318 14:04:47.856408 6 log.go:172] (0xc002078160) Data frame received for 5 I0318 14:04:47.856430 6 log.go:172] (0xc0030cd7c0) (5) Data frame handling I0318 14:04:47.856451 6 log.go:172] (0xc002078160) Data frame received for 3 I0318 14:04:47.856487 6 log.go:172] (0xc0012e6e60) (3) Data frame handling I0318 14:04:47.856525 6 log.go:172] (0xc0012e6e60) (3) Data frame sent I0318 14:04:47.856545 6 log.go:172] (0xc002078160) Data frame received for 3 I0318 14:04:47.856558 6 log.go:172] (0xc0012e6e60) (3) Data frame handling I0318 14:04:47.858179 6 log.go:172] (0xc002078160) Data frame received for 1 I0318 14:04:47.858206 6 log.go:172] (0xc001c52320) (1) Data frame handling I0318 14:04:47.858228 6 log.go:172] (0xc001c52320) (1) Data frame sent I0318 14:04:47.858247 6 log.go:172] (0xc002078160) (0xc001c52320) Stream removed, broadcasting: 1 I0318 14:04:47.858334 6 log.go:172] (0xc002078160) (0xc001c52320) Stream removed, broadcasting: 1 I0318 14:04:47.858346 6 log.go:172] (0xc002078160) (0xc0012e6e60) Stream removed, broadcasting: 3 I0318 14:04:47.858485 6 log.go:172] (0xc002078160) (0xc0030cd7c0) Stream removed, broadcasting: 5 Mar 18 14:04:47.858: INFO: Exec stderr: "" I0318 14:04:47.858651 6 log.go:172] (0xc002078160) Go away received Mar 18 14:04:47.858: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6701 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 14:04:47.858: INFO: >>> kubeConfig: /root/.kube/config I0318 14:04:47.892642 6 log.go:172] (0xc002079080) (0xc001c526e0) Create stream I0318 14:04:47.892668 6 log.go:172] (0xc002079080) (0xc001c526e0) Stream added, broadcasting: 1 I0318 14:04:47.895642 6 log.go:172] (0xc002079080) Reply frame received for 1 I0318 14:04:47.895676 6 log.go:172] (0xc002079080) (0xc0012e7180) Create stream I0318 14:04:47.895689 6 log.go:172] (0xc002079080) (0xc0012e7180) Stream added, broadcasting: 3 I0318 14:04:47.897868 6 log.go:172] (0xc002079080) Reply frame received for 3 I0318 14:04:47.897910 6 log.go:172] (0xc002079080) (0xc0019d0aa0) Create stream I0318 14:04:47.897922 6 log.go:172] (0xc002079080) (0xc0019d0aa0) Stream added, broadcasting: 5 I0318 14:04:47.898966 6 log.go:172] (0xc002079080) Reply frame received for 5 I0318 14:04:47.960750 6 log.go:172] (0xc002079080) Data frame received for 5 I0318 14:04:47.960800 6 log.go:172] (0xc0019d0aa0) (5) Data frame handling I0318 14:04:47.960836 6 log.go:172] (0xc002079080) Data frame received for 3 I0318 14:04:47.960856 6 log.go:172] (0xc0012e7180) (3) Data frame handling I0318 14:04:47.960883 6 log.go:172] (0xc0012e7180) (3) Data frame sent I0318 14:04:47.960901 6 log.go:172] (0xc002079080) Data frame received for 3 I0318 14:04:47.960918 6 log.go:172] (0xc0012e7180) (3) Data frame handling I0318 14:04:47.962436 6 log.go:172] (0xc002079080) Data frame received for 1 I0318 14:04:47.962456 6 log.go:172] (0xc001c526e0) (1) Data frame handling I0318 14:04:47.962470 6 log.go:172] (0xc001c526e0) (1) Data frame sent I0318 14:04:47.962678 6 log.go:172] (0xc002079080) (0xc001c526e0) Stream removed, broadcasting: 1 I0318 14:04:47.962722 6 log.go:172] (0xc002079080) Go away received I0318 14:04:47.962834 6 log.go:172] (0xc002079080) (0xc001c526e0) Stream removed, broadcasting: 1 I0318 14:04:47.962863 6 
log.go:172] (0xc002079080) (0xc0012e7180) Stream removed, broadcasting: 3 I0318 14:04:47.962880 6 log.go:172] (0xc002079080) (0xc0019d0aa0) Stream removed, broadcasting: 5 Mar 18 14:04:47.962: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 18 14:04:47.962: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6701 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 14:04:47.963: INFO: >>> kubeConfig: /root/.kube/config I0318 14:04:47.990264 6 log.go:172] (0xc0014dce70) (0xc0019d0fa0) Create stream I0318 14:04:47.990298 6 log.go:172] (0xc0014dce70) (0xc0019d0fa0) Stream added, broadcasting: 1 I0318 14:04:47.992765 6 log.go:172] (0xc0014dce70) Reply frame received for 1 I0318 14:04:47.992827 6 log.go:172] (0xc0014dce70) (0xc001c52780) Create stream I0318 14:04:47.992835 6 log.go:172] (0xc0014dce70) (0xc001c52780) Stream added, broadcasting: 3 I0318 14:04:47.993796 6 log.go:172] (0xc0014dce70) Reply frame received for 3 I0318 14:04:47.993884 6 log.go:172] (0xc0014dce70) (0xc0030cd860) Create stream I0318 14:04:47.993897 6 log.go:172] (0xc0014dce70) (0xc0030cd860) Stream added, broadcasting: 5 I0318 14:04:47.994631 6 log.go:172] (0xc0014dce70) Reply frame received for 5 I0318 14:04:48.044049 6 log.go:172] (0xc0014dce70) Data frame received for 5 I0318 14:04:48.044093 6 log.go:172] (0xc0030cd860) (5) Data frame handling I0318 14:04:48.044123 6 log.go:172] (0xc0014dce70) Data frame received for 3 I0318 14:04:48.044142 6 log.go:172] (0xc001c52780) (3) Data frame handling I0318 14:04:48.044165 6 log.go:172] (0xc001c52780) (3) Data frame sent I0318 14:04:48.044182 6 log.go:172] (0xc0014dce70) Data frame received for 3 I0318 14:04:48.044196 6 log.go:172] (0xc001c52780) (3) Data frame handling I0318 14:04:48.045598 6 log.go:172] (0xc0014dce70) Data frame received for 1 I0318 14:04:48.045621 6 log.go:172] (0xc0019d0fa0) (1) Data frame handling I0318 14:04:48.045639 6 log.go:172] (0xc0019d0fa0) (1) Data frame sent I0318 14:04:48.045655 6 log.go:172] (0xc0014dce70) (0xc0019d0fa0) Stream removed, broadcasting: 1 I0318 14:04:48.045718 6 log.go:172] (0xc0014dce70) Go away received I0318 14:04:48.045758 6 log.go:172] (0xc0014dce70) (0xc0019d0fa0) Stream removed, broadcasting: 1 I0318 14:04:48.045779 6 log.go:172] (0xc0014dce70) (0xc001c52780) Stream removed, broadcasting: 3 I0318 14:04:48.045797 6 log.go:172] (0xc0014dce70) (0xc0030cd860) Stream removed, broadcasting: 5 Mar 18 14:04:48.045: INFO: Exec stderr: "" Mar 18 14:04:48.045: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6701 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 14:04:48.045: INFO: >>> kubeConfig: /root/.kube/config I0318 14:04:48.081028 6 log.go:172] (0xc0014ddef0) (0xc0019d1400) Create stream I0318 14:04:48.081054 6 log.go:172] (0xc0014ddef0) (0xc0019d1400) Stream added, broadcasting: 1 I0318 14:04:48.084703 6 log.go:172] (0xc0014ddef0) Reply frame received for 1 I0318 14:04:48.084761 6 log.go:172] (0xc0014ddef0) (0xc0003b4500) Create stream I0318 14:04:48.084777 6 log.go:172] (0xc0014ddef0) (0xc0003b4500) Stream added, broadcasting: 3 I0318 14:04:48.086084 6 log.go:172] (0xc0014ddef0) Reply frame received for 3 I0318 14:04:48.086133 6 log.go:172] (0xc0014ddef0) (0xc0003b45a0) Create stream I0318 14:04:48.086204 6 log.go:172] (0xc0014ddef0) 
(0xc0003b45a0) Stream added, broadcasting: 5 I0318 14:04:48.087304 6 log.go:172] (0xc0014ddef0) Reply frame received for 5 I0318 14:04:48.164662 6 log.go:172] (0xc0014ddef0) Data frame received for 3 I0318 14:04:48.164715 6 log.go:172] (0xc0003b4500) (3) Data frame handling I0318 14:04:48.164739 6 log.go:172] (0xc0003b4500) (3) Data frame sent I0318 14:04:48.164757 6 log.go:172] (0xc0014ddef0) Data frame received for 3 I0318 14:04:48.164772 6 log.go:172] (0xc0003b4500) (3) Data frame handling I0318 14:04:48.164815 6 log.go:172] (0xc0014ddef0) Data frame received for 5 I0318 14:04:48.164845 6 log.go:172] (0xc0003b45a0) (5) Data frame handling I0318 14:04:48.166297 6 log.go:172] (0xc0014ddef0) Data frame received for 1 I0318 14:04:48.166328 6 log.go:172] (0xc0019d1400) (1) Data frame handling I0318 14:04:48.166348 6 log.go:172] (0xc0019d1400) (1) Data frame sent I0318 14:04:48.166399 6 log.go:172] (0xc0014ddef0) (0xc0019d1400) Stream removed, broadcasting: 1 I0318 14:04:48.166437 6 log.go:172] (0xc0014ddef0) Go away received I0318 14:04:48.166523 6 log.go:172] (0xc0014ddef0) (0xc0019d1400) Stream removed, broadcasting: 1 I0318 14:04:48.166542 6 log.go:172] (0xc0014ddef0) (0xc0003b4500) Stream removed, broadcasting: 3 I0318 14:04:48.166555 6 log.go:172] (0xc0014ddef0) (0xc0003b45a0) Stream removed, broadcasting: 5 Mar 18 14:04:48.166: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 18 14:04:48.166: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6701 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 14:04:48.166: INFO: >>> kubeConfig: /root/.kube/config I0318 14:04:48.204128 6 log.go:172] (0xc002de3760) (0xc0030cdb80) Create stream I0318 14:04:48.204178 6 log.go:172] (0xc002de3760) (0xc0030cdb80) Stream added, broadcasting: 1 I0318 14:04:48.210186 6 log.go:172] (0xc002de3760) Reply frame received for 1 I0318 14:04:48.210236 6 log.go:172] (0xc002de3760) (0xc0003b4640) Create stream I0318 14:04:48.210279 6 log.go:172] (0xc002de3760) (0xc0003b4640) Stream added, broadcasting: 3 I0318 14:04:48.211803 6 log.go:172] (0xc002de3760) Reply frame received for 3 I0318 14:04:48.211837 6 log.go:172] (0xc002de3760) (0xc0019d1680) Create stream I0318 14:04:48.211862 6 log.go:172] (0xc002de3760) (0xc0019d1680) Stream added, broadcasting: 5 I0318 14:04:48.213325 6 log.go:172] (0xc002de3760) Reply frame received for 5 I0318 14:04:48.280564 6 log.go:172] (0xc002de3760) Data frame received for 3 I0318 14:04:48.280600 6 log.go:172] (0xc0003b4640) (3) Data frame handling I0318 14:04:48.280622 6 log.go:172] (0xc0003b4640) (3) Data frame sent I0318 14:04:48.280632 6 log.go:172] (0xc002de3760) Data frame received for 3 I0318 14:04:48.280649 6 log.go:172] (0xc0003b4640) (3) Data frame handling I0318 14:04:48.280785 6 log.go:172] (0xc002de3760) Data frame received for 5 I0318 14:04:48.280801 6 log.go:172] (0xc0019d1680) (5) Data frame handling I0318 14:04:48.282441 6 log.go:172] (0xc002de3760) Data frame received for 1 I0318 14:04:48.282485 6 log.go:172] (0xc0030cdb80) (1) Data frame handling I0318 14:04:48.282524 6 log.go:172] (0xc0030cdb80) (1) Data frame sent I0318 14:04:48.282552 6 log.go:172] (0xc002de3760) (0xc0030cdb80) Stream removed, broadcasting: 1 I0318 14:04:48.282584 6 log.go:172] (0xc002de3760) Go away received I0318 14:04:48.282726 6 log.go:172] (0xc002de3760) (0xc0030cdb80) Stream removed, broadcasting: 1 I0318 
14:04:48.282754 6 log.go:172] (0xc002de3760) (0xc0003b4640) Stream removed, broadcasting: 3 I0318 14:04:48.282769 6 log.go:172] (0xc002de3760) (0xc0019d1680) Stream removed, broadcasting: 5 Mar 18 14:04:48.282: INFO: Exec stderr: "" Mar 18 14:04:48.282: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6701 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 14:04:48.282: INFO: >>> kubeConfig: /root/.kube/config I0318 14:04:48.314467 6 log.go:172] (0xc0029506e0) (0xc0003b4f00) Create stream I0318 14:04:48.314491 6 log.go:172] (0xc0029506e0) (0xc0003b4f00) Stream added, broadcasting: 1 I0318 14:04:48.316700 6 log.go:172] (0xc0029506e0) Reply frame received for 1 I0318 14:04:48.316752 6 log.go:172] (0xc0029506e0) (0xc001c52820) Create stream I0318 14:04:48.316772 6 log.go:172] (0xc0029506e0) (0xc001c52820) Stream added, broadcasting: 3 I0318 14:04:48.317991 6 log.go:172] (0xc0029506e0) Reply frame received for 3 I0318 14:04:48.318038 6 log.go:172] (0xc0029506e0) (0xc0019d17c0) Create stream I0318 14:04:48.318052 6 log.go:172] (0xc0029506e0) (0xc0019d17c0) Stream added, broadcasting: 5 I0318 14:04:48.319323 6 log.go:172] (0xc0029506e0) Reply frame received for 5 I0318 14:04:48.371449 6 log.go:172] (0xc0029506e0) Data frame received for 3 I0318 14:04:48.371467 6 log.go:172] (0xc001c52820) (3) Data frame handling I0318 14:04:48.371474 6 log.go:172] (0xc001c52820) (3) Data frame sent I0318 14:04:48.371479 6 log.go:172] (0xc0029506e0) Data frame received for 3 I0318 14:04:48.371483 6 log.go:172] (0xc001c52820) (3) Data frame handling I0318 14:04:48.371509 6 log.go:172] (0xc0029506e0) Data frame received for 5 I0318 14:04:48.371540 6 log.go:172] (0xc0019d17c0) (5) Data frame handling I0318 14:04:48.373031 6 log.go:172] (0xc0029506e0) Data frame received for 1 I0318 14:04:48.373063 6 log.go:172] (0xc0003b4f00) (1) Data frame handling I0318 14:04:48.373347 6 log.go:172] (0xc0003b4f00) (1) Data frame sent I0318 14:04:48.373432 6 log.go:172] (0xc0029506e0) (0xc0003b4f00) Stream removed, broadcasting: 1 I0318 14:04:48.373485 6 log.go:172] (0xc0029506e0) Go away received I0318 14:04:48.373584 6 log.go:172] (0xc0029506e0) (0xc0003b4f00) Stream removed, broadcasting: 1 I0318 14:04:48.373614 6 log.go:172] (0xc0029506e0) (0xc001c52820) Stream removed, broadcasting: 3 I0318 14:04:48.373627 6 log.go:172] (0xc0029506e0) (0xc0019d17c0) Stream removed, broadcasting: 5 Mar 18 14:04:48.373: INFO: Exec stderr: "" Mar 18 14:04:48.373: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6701 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 14:04:48.373: INFO: >>> kubeConfig: /root/.kube/config I0318 14:04:48.413917 6 log.go:172] (0xc0027b16b0) (0xc0019d1b80) Create stream I0318 14:04:48.413944 6 log.go:172] (0xc0027b16b0) (0xc0019d1b80) Stream added, broadcasting: 1 I0318 14:04:48.416128 6 log.go:172] (0xc0027b16b0) Reply frame received for 1 I0318 14:04:48.416174 6 log.go:172] (0xc0027b16b0) (0xc0030cdc20) Create stream I0318 14:04:48.416186 6 log.go:172] (0xc0027b16b0) (0xc0030cdc20) Stream added, broadcasting: 3 I0318 14:04:48.416940 6 log.go:172] (0xc0027b16b0) Reply frame received for 3 I0318 14:04:48.416981 6 log.go:172] (0xc0027b16b0) (0xc0012e7220) Create stream I0318 14:04:48.416995 6 log.go:172] (0xc0027b16b0) (0xc0012e7220) Stream added, broadcasting: 5 I0318 14:04:48.418235 6 
log.go:172] (0xc0027b16b0) Reply frame received for 5 I0318 14:04:48.480079 6 log.go:172] (0xc0027b16b0) Data frame received for 3 I0318 14:04:48.480135 6 log.go:172] (0xc0030cdc20) (3) Data frame handling I0318 14:04:48.480171 6 log.go:172] (0xc0030cdc20) (3) Data frame sent I0318 14:04:48.480197 6 log.go:172] (0xc0027b16b0) Data frame received for 3 I0318 14:04:48.480212 6 log.go:172] (0xc0030cdc20) (3) Data frame handling I0318 14:04:48.480245 6 log.go:172] (0xc0027b16b0) Data frame received for 5 I0318 14:04:48.480271 6 log.go:172] (0xc0012e7220) (5) Data frame handling I0318 14:04:48.481739 6 log.go:172] (0xc0027b16b0) Data frame received for 1 I0318 14:04:48.481776 6 log.go:172] (0xc0019d1b80) (1) Data frame handling I0318 14:04:48.481839 6 log.go:172] (0xc0019d1b80) (1) Data frame sent I0318 14:04:48.481899 6 log.go:172] (0xc0027b16b0) (0xc0019d1b80) Stream removed, broadcasting: 1 I0318 14:04:48.481926 6 log.go:172] (0xc0027b16b0) Go away received I0318 14:04:48.482087 6 log.go:172] (0xc0027b16b0) (0xc0019d1b80) Stream removed, broadcasting: 1 I0318 14:04:48.482140 6 log.go:172] (0xc0027b16b0) (0xc0030cdc20) Stream removed, broadcasting: 3 I0318 14:04:48.482162 6 log.go:172] (0xc0027b16b0) (0xc0012e7220) Stream removed, broadcasting: 5 Mar 18 14:04:48.482: INFO: Exec stderr: "" Mar 18 14:04:48.482: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6701 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 14:04:48.482: INFO: >>> kubeConfig: /root/.kube/config I0318 14:04:48.523663 6 log.go:172] (0xc002b5e790) (0xc001c52c80) Create stream I0318 14:04:48.523699 6 log.go:172] (0xc002b5e790) (0xc001c52c80) Stream added, broadcasting: 1 I0318 14:04:48.526350 6 log.go:172] (0xc002b5e790) Reply frame received for 1 I0318 14:04:48.526405 6 log.go:172] (0xc002b5e790) (0xc0012e72c0) Create stream I0318 14:04:48.526423 6 log.go:172] (0xc002b5e790) (0xc0012e72c0) Stream added, broadcasting: 3 I0318 14:04:48.527510 6 log.go:172] (0xc002b5e790) Reply frame received for 3 I0318 14:04:48.527553 6 log.go:172] (0xc002b5e790) (0xc0012e7400) Create stream I0318 14:04:48.527568 6 log.go:172] (0xc002b5e790) (0xc0012e7400) Stream added, broadcasting: 5 I0318 14:04:48.528470 6 log.go:172] (0xc002b5e790) Reply frame received for 5 I0318 14:04:48.580783 6 log.go:172] (0xc002b5e790) Data frame received for 5 I0318 14:04:48.580828 6 log.go:172] (0xc0012e7400) (5) Data frame handling I0318 14:04:48.580856 6 log.go:172] (0xc002b5e790) Data frame received for 3 I0318 14:04:48.580870 6 log.go:172] (0xc0012e72c0) (3) Data frame handling I0318 14:04:48.580895 6 log.go:172] (0xc0012e72c0) (3) Data frame sent I0318 14:04:48.580912 6 log.go:172] (0xc002b5e790) Data frame received for 3 I0318 14:04:48.580925 6 log.go:172] (0xc0012e72c0) (3) Data frame handling I0318 14:04:48.582729 6 log.go:172] (0xc002b5e790) Data frame received for 1 I0318 14:04:48.582767 6 log.go:172] (0xc001c52c80) (1) Data frame handling I0318 14:04:48.582804 6 log.go:172] (0xc001c52c80) (1) Data frame sent I0318 14:04:48.582825 6 log.go:172] (0xc002b5e790) (0xc001c52c80) Stream removed, broadcasting: 1 I0318 14:04:48.582848 6 log.go:172] (0xc002b5e790) Go away received I0318 14:04:48.583030 6 log.go:172] (0xc002b5e790) (0xc001c52c80) Stream removed, broadcasting: 1 I0318 14:04:48.583052 6 log.go:172] (0xc002b5e790) (0xc0012e72c0) Stream removed, broadcasting: 3 I0318 14:04:48.583070 6 log.go:172] (0xc002b5e790) (0xc0012e7400) 
Stream removed, broadcasting: 5 Mar 18 14:04:48.583: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:04:48.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6701" for this suite. Mar 18 14:05:34.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:05:34.685: INFO: namespace e2e-kubelet-etc-hosts-6701 deletion completed in 46.097931037s • [SLOW TEST:57.281 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:05:34.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 14:05:34.714: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 18 14:05:34.750: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 18 14:05:39.753: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 18 14:05:39.753: INFO: Creating deployment "test-rolling-update-deployment" Mar 18 14:05:39.757: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 18 14:05:39.776: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 18 14:05:41.784: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 18 14:05:41.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720137139, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720137139, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720137139, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720137139, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Mar 18 14:05:43.791: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 18 14:05:43.801: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-2317,SelfLink:/apis/apps/v1/namespaces/deployment-2317/deployments/test-rolling-update-deployment,UID:79f9f73f-bb52-41d6-bc3d-619a1ef7bbec,ResourceVersion:528860,Generation:1,CreationTimestamp:2020-03-18 14:05:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-18 14:05:39 +0000 UTC 2020-03-18 14:05:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-18 14:05:42 +0000 UTC 2020-03-18 14:05:39 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 18 14:05:43.805: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment 
"test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-2317,SelfLink:/apis/apps/v1/namespaces/deployment-2317/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:f6260f5d-a7db-457b-a938-e25547df4ada,ResourceVersion:528849,Generation:1,CreationTimestamp:2020-03-18 14:05:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 79f9f73f-bb52-41d6-bc3d-619a1ef7bbec 0xc002c1fcc7 0xc002c1fcc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 18 14:05:43.805: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 18 14:05:43.805: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-2317,SelfLink:/apis/apps/v1/namespaces/deployment-2317/replicasets/test-rolling-update-controller,UID:798b7b11-267f-42d9-8f54-8d2a28cb662a,ResourceVersion:528858,Generation:2,CreationTimestamp:2020-03-18 14:05:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 
1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 79f9f73f-bb52-41d6-bc3d-619a1ef7bbec 0xc002c1fbf7 0xc002c1fbf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 18 14:05:43.809: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-vx5nt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-vx5nt,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-2317,SelfLink:/api/v1/namespaces/deployment-2317/pods/test-rolling-update-deployment-79f6b9d75c-vx5nt,UID:9910b013-e84b-41b9-8567-bbd9ec1b7057,ResourceVersion:528848,Generation:0,CreationTimestamp:2020-03-18 14:05:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c f6260f5d-a7db-457b-a938-e25547df4ada 0xc001999207 0xc001999208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tlksp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tlksp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-tlksp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001999560} {node.kubernetes.io/unreachable Exists NoExecute 0xc001999580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:05:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:05:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:05:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:05:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.99,StartTime:2020-03-18 14:05:39 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-18 14:05:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://249e0251cf6aed1f34e90b2f5bc5f7763f3d7a64c512daf9cc52ffc6603b285a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:05:43.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2317" for this suite. 
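[Editor's sketch] The RollingUpdateDeployment test above exercises adoption plus rollout: the Deployment's selector (name: sample-pod) matches the pods of the pre-existing test-rolling-update-controller ReplicaSet, so the controller adopts it as an old ReplicaSet and rolls to a new one under the 25%/25% maxUnavailable/maxSurge defaults visible in the dump. A minimal client-go sketch of such a Deployment follows; it assumes a recent client-go where Create takes a context (the run above used older v1.15 signatures), and names and images are copied from the log:

    package sketch

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
    )

    // createRollingUpdateDeployment creates a Deployment whose selector matches
    // the bare ReplicaSet's pod labels, so the Deployment adopts that ReplicaSet
    // and replaces its pods via a rolling update.
    func createRollingUpdateDeployment(ctx context.Context, cs kubernetes.Interface, ns string) error {
        replicas := int32(1)
        maxUnavailable := intstr.FromString("25%")
        maxSurge := intstr.FromString("25%")
        d := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
                Strategy: appsv1.DeploymentStrategy{
                    Type: appsv1.RollingUpdateDeploymentStrategyType,
                    RollingUpdate: &appsv1.RollingUpdateDeployment{
                        MaxUnavailable: &maxUnavailable,
                        MaxSurge:       &maxSurge,
                    },
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "redis",
                            Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                        }},
                    },
                },
            },
        }
        _, err := cs.AppsV1().Deployments(ns).Create(ctx, d, metav1.CreateOptions{})
        return err
    }

Because the adopted ReplicaSet's revision annotation is already set, the Deployment assigns the next revision (3546343826724305833) to its new ReplicaSet, which is what the log's "gets the next revision" assertion checks.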
Mar 18 14:05:49.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:05:49.899: INFO: namespace deployment-2317 deletion completed in 6.086254371s • [SLOW TEST:15.214 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:05:49.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 18 14:05:55.017: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:05:56.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3918" for this suite. 
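[Editor's sketch] The ReplicaSet adoption test that finishes above relies purely on label selection: a bare pod whose labels match a ReplicaSet's selector is adopted (it gains an ownerReference), and changing that label afterwards makes the controller release the pod and start a replacement. A hedged sketch of the release step, assuming a recent client-go; the pod name and new label value are illustrative:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // releasePod changes the label the ReplicaSet selects on, so the controller
    // drops its ownerReference and creates a new pod to restore the replica count.
    func releasePod(ctx context.Context, cs kubernetes.Interface, ns, podName string) error {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        pod.Labels["name"] = "pod-adoption-release-released" // no longer matches the selector
        _, err = cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
        return err
    }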
Mar 18 14:06:18.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:06:18.154: INFO: namespace replicaset-3918 deletion completed in 22.118960586s • [SLOW TEST:28.254 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:06:18.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 18 14:06:18.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5291358-b7bd-471f-9090-4aded1565e60" in namespace "projected-4900" to be "success or failure" Mar 18 14:06:18.224: INFO: Pod "downwardapi-volume-c5291358-b7bd-471f-9090-4aded1565e60": Phase="Pending", Reason="", readiness=false. Elapsed: 3.604444ms Mar 18 14:06:20.228: INFO: Pod "downwardapi-volume-c5291358-b7bd-471f-9090-4aded1565e60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007649158s Mar 18 14:06:22.232: INFO: Pod "downwardapi-volume-c5291358-b7bd-471f-9090-4aded1565e60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011712009s STEP: Saw pod success Mar 18 14:06:22.232: INFO: Pod "downwardapi-volume-c5291358-b7bd-471f-9090-4aded1565e60" satisfied condition "success or failure" Mar 18 14:06:22.235: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c5291358-b7bd-471f-9090-4aded1565e60 container client-container: STEP: delete the pod Mar 18 14:06:22.255: INFO: Waiting for pod downwardapi-volume-c5291358-b7bd-471f-9090-4aded1565e60 to disappear Mar 18 14:06:22.276: INFO: Pod downwardapi-volume-c5291358-b7bd-471f-9090-4aded1565e60 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:06:22.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4900" for this suite. 
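[Editor's sketch] The projected downwardAPI test above checks that, when a container sets no CPU limit, a resourceFieldRef for limits.cpu falls back to the node's allocatable CPU. A minimal sketch of the volume wiring under a recent client-go; the image, mount path, and file name are illustrative rather than taken from the log:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // cpuLimitPod mounts a projected downwardAPI file exposing limits.cpu; with
    // no limit set on the container, the kubelet writes node-allocatable CPU.
    func cpuLimitPod(ns string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo", Namespace: ns},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:         "client-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path: "cpu_limit",
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "limits.cpu",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }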
Mar 18 14:06:28.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:06:28.375: INFO: namespace projected-4900 deletion completed in 6.095410309s • [SLOW TEST:10.220 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:06:28.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-f35cee82-30e5-494d-b025-4a8cd1061036 STEP: Creating secret with name s-test-opt-upd-633af4f8-203c-4e84-8ace-00d94fe160c9 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f35cee82-30e5-494d-b025-4a8cd1061036 STEP: Updating secret s-test-opt-upd-633af4f8-203c-4e84-8ace-00d94fe160c9 STEP: Creating secret with name s-test-opt-create-8955f4f6-aeec-4f5f-b394-d22ac7b6d13c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:08:05.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9017" for this suite. 
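[Editor's sketch] The secret-volume test above deletes one referenced secret, updates a second, and creates a third while the pod is running; because every reference is optional, the pod stays up and the kubelet reconciles the mounted files in place, which is the "waiting to observe update in volume" step. A sketch of an optional secret volume source, assuming a recent client-go; the names are illustrative:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // optionalSecretVolume tolerates the secret being absent at mount time; the
    // kubelet updates the mounted files when the secret appears or changes.
    func optionalSecretVolume(name, secretName string) corev1.Volume {
        optional := true
        return corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{
                    SecretName: secretName,
                    Optional:   &optional,
                },
            },
        }
    }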
Mar 18 14:08:27.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:08:27.189: INFO: namespace secrets-9017 deletion completed in 22.092553295s • [SLOW TEST:118.813 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:08:27.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Mar 18 14:08:27.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9404' Mar 18 14:08:29.500: INFO: stderr: "" Mar 18 14:08:29.500: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 18 14:08:29.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9404' Mar 18 14:08:29.622: INFO: stderr: "" Mar 18 14:08:29.622: INFO: stdout: "update-demo-nautilus-ls8tt update-demo-nautilus-rzwkw " Mar 18 14:08:29.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ls8tt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9404' Mar 18 14:08:29.709: INFO: stderr: "" Mar 18 14:08:29.709: INFO: stdout: "" Mar 18 14:08:29.709: INFO: update-demo-nautilus-ls8tt is created but not running Mar 18 14:08:34.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9404' Mar 18 14:08:34.881: INFO: stderr: "" Mar 18 14:08:34.881: INFO: stdout: "update-demo-nautilus-ls8tt update-demo-nautilus-rzwkw " Mar 18 14:08:34.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ls8tt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9404' Mar 18 14:08:34.965: INFO: stderr: "" Mar 18 14:08:34.965: INFO: stdout: "true" Mar 18 14:08:34.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ls8tt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9404' Mar 18 14:08:35.051: INFO: stderr: "" Mar 18 14:08:35.051: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 14:08:35.051: INFO: validating pod update-demo-nautilus-ls8tt Mar 18 14:08:35.055: INFO: got data: { "image": "nautilus.jpg" } Mar 18 14:08:35.055: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 14:08:35.055: INFO: update-demo-nautilus-ls8tt is verified up and running Mar 18 14:08:35.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rzwkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9404' Mar 18 14:08:35.152: INFO: stderr: "" Mar 18 14:08:35.152: INFO: stdout: "true" Mar 18 14:08:35.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rzwkw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9404' Mar 18 14:08:35.238: INFO: stderr: "" Mar 18 14:08:35.238: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 18 14:08:35.238: INFO: validating pod update-demo-nautilus-rzwkw Mar 18 14:08:35.242: INFO: got data: { "image": "nautilus.jpg" } Mar 18 14:08:35.242: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 18 14:08:35.242: INFO: update-demo-nautilus-rzwkw is verified up and running STEP: rolling-update to new replication controller Mar 18 14:08:35.245: INFO: scanned /root for discovery docs: Mar 18 14:08:35.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9404' Mar 18 14:08:57.719: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 18 14:08:57.719: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 18 14:08:57.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9404' Mar 18 14:08:57.819: INFO: stderr: "" Mar 18 14:08:57.820: INFO: stdout: "update-demo-kitten-bp2gn update-demo-kitten-tb2n8 " Mar 18 14:08:57.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bp2gn -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9404' Mar 18 14:08:57.912: INFO: stderr: "" Mar 18 14:08:57.912: INFO: stdout: "true" Mar 18 14:08:57.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bp2gn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9404' Mar 18 14:08:58.005: INFO: stderr: "" Mar 18 14:08:58.005: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 18 14:08:58.005: INFO: validating pod update-demo-kitten-bp2gn Mar 18 14:08:58.009: INFO: got data: { "image": "kitten.jpg" } Mar 18 14:08:58.009: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 18 14:08:58.009: INFO: update-demo-kitten-bp2gn is verified up and running Mar 18 14:08:58.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tb2n8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9404' Mar 18 14:08:58.107: INFO: stderr: "" Mar 18 14:08:58.107: INFO: stdout: "true" Mar 18 14:08:58.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tb2n8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9404' Mar 18 14:08:58.211: INFO: stderr: "" Mar 18 14:08:58.211: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 18 14:08:58.211: INFO: validating pod update-demo-kitten-tb2n8 Mar 18 14:08:58.216: INFO: got data: { "image": "kitten.jpg" } Mar 18 14:08:58.216: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 18 14:08:58.216: INFO: update-demo-kitten-tb2n8 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:08:58.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9404" for this suite. 
Mar 18 14:09:20.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:09:20.321: INFO: namespace kubectl-9404 deletion completed in 22.101580876s • [SLOW TEST:53.132 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:09:20.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:09:45.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5991" for this suite. Mar 18 14:09:51.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:09:51.607: INFO: namespace namespaces-5991 deletion completed in 6.087920603s STEP: Destroying namespace "nsdeletetest-4704" for this suite. Mar 18 14:09:51.609: INFO: Namespace nsdeletetest-4704 was already deleted STEP: Destroying namespace "nsdeletetest-7570" for this suite. 
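[Editor's sketch] The Namespaces test above deletes a namespace containing a running pod and waits for the whole namespace (and so every pod inside it) to disappear before recreating one of the same basename. A sketch of the delete-and-wait step under a recent client-go (PollUntilContextTimeout needs a current apimachinery); the interval and timeout are illustrative:

    package sketch

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // deleteNamespaceAndWait removes the namespace, then polls until the API
    // server reports it gone, i.e. all contained objects have been finalized.
    func deleteNamespaceAndWait(ctx context.Context, cs kubernetes.Interface, name string) error {
        if err := cs.CoreV1().Namespaces().Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
            return err
        }
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return true, nil
                }
                return false, nil
            })
    }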
Mar 18 14:09:57.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:09:57.739: INFO: namespace nsdeletetest-7570 deletion completed in 6.130623202s • [SLOW TEST:37.417 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:09:57.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-8049 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8049 to expose endpoints map[] Mar 18 14:09:57.863: INFO: Get endpoints failed (8.295296ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 18 14:09:58.867: INFO: successfully validated that service endpoint-test2 in namespace services-8049 exposes endpoints map[] (1.012027519s elapsed) STEP: Creating pod pod1 in namespace services-8049 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8049 to expose endpoints map[pod1:[80]] Mar 18 14:10:01.923: INFO: successfully validated that service endpoint-test2 in namespace services-8049 exposes endpoints map[pod1:[80]] (3.048805958s elapsed) STEP: Creating pod pod2 in namespace services-8049 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8049 to expose endpoints map[pod1:[80] pod2:[80]] Mar 18 14:10:04.993: INFO: successfully validated that service endpoint-test2 in namespace services-8049 exposes endpoints map[pod1:[80] pod2:[80]] (3.065555369s elapsed) STEP: Deleting pod pod1 in namespace services-8049 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8049 to expose endpoints map[pod2:[80]] Mar 18 14:10:06.088: INFO: successfully validated that service endpoint-test2 in namespace services-8049 exposes endpoints map[pod2:[80]] (1.090344122s elapsed) STEP: Deleting pod pod2 in namespace services-8049 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8049 to expose endpoints map[] Mar 18 14:10:06.122: INFO: successfully validated that service endpoint-test2 in namespace services-8049 exposes endpoints map[] (12.094937ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:10:06.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8049" for this suite. 
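[Editor's sketch] The Services test above shows the endpoints controller tracking pod lifecycle: endpoint-test2 exposes map[] with no backing pods, gains pod1:[80] and pod2:[80] as labelled pods become ready, and shrinks again as they are deleted. A minimal sketch of the service plus one matching pod, assuming a recent client-go; the selector labels and pause image are illustrative, the service and pod names come from the log:

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createServiceWithBackend creates a selector-based service; once a pod with
    // matching labels is Ready, the endpoints controller adds its IP:port to the
    // Endpoints object of the same name.
    func createServiceWithBackend(ctx context.Context, cs kubernetes.Interface, ns string) error {
        labels := map[string]string{"app": "endpoint-test2"} // illustrative selector
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
            Spec: corev1.ServiceSpec{
                Selector: labels,
                Ports:    []corev1.ServicePort{{Port: 80}},
            },
        }
        if _, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
            return err
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod1", Labels: labels},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
            },
        }
        _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
        return err
    }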
Mar 18 14:10:28.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:10:28.264: INFO: namespace services-8049 deletion completed in 22.11342288s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:30.525 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:10:28.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 14:10:46.356: INFO: Container started at 2020-03-18 14:10:30 +0000 UTC, pod became ready at 2020-03-18 14:10:45 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:10:46.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7708" for this suite. 
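[Editor's sketch] The probe test above asserts roughly a 15-second gap between container start (14:10:30) and pod readiness (14:10:45), driven by the probe's initial delay, and that the container never restarts: readiness failures only gate traffic, they do not kill the container the way liveness failures do. A sketch of such a probe, assuming a recent client-go where the handler field is named ProbeHandler; the command and timings are illustrative:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // delayedReadinessProbe keeps the container unready for the initial delay;
    // a failing readiness probe never restarts a container.
    func delayedReadinessProbe() *corev1.Probe {
        return &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
            },
            InitialDelaySeconds: 15,
            PeriodSeconds:       5,
        }
    }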
Mar 18 14:11:08.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:11:08.483: INFO: namespace container-probe-7708 deletion completed in 22.122830354s • [SLOW TEST:40.219 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:11:08.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:11:12.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1988" for this suite. 
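[Editor's sketch] The Kubelet hostAliases test above checks that entries declared in the pod spec are appended to the kubelet-managed /etc/hosts. A sketch of the relevant fragment under a recent client-go; the IP and hostnames are illustrative:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // hostAliases returns entries the kubelet appends to the pod's /etc/hosts
    // (only when /etc/hosts is kubelet-managed, e.g. hostNetwork is false and
    // the container does not mount its own /etc/hosts).
    func hostAliases() []corev1.HostAlias {
        return []corev1.HostAlias{{
            IP:        "123.45.67.89",
            Hostnames: []string{"foo.remote", "bar.remote"},
        }}
    }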
Mar 18 14:11:50.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:11:50.708: INFO: namespace kubelet-test-1988 deletion completed in 38.101010496s • [SLOW TEST:42.224 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:11:50.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 18 14:11:50.775: INFO: Waiting up to 5m0s for pod "pod-859e758a-cc4b-4953-a71a-408c7d912a51" in namespace "emptydir-8616" to be "success or failure" Mar 18 14:11:50.782: INFO: Pod "pod-859e758a-cc4b-4953-a71a-408c7d912a51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.280453ms Mar 18 14:11:52.787: INFO: Pod "pod-859e758a-cc4b-4953-a71a-408c7d912a51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011835258s Mar 18 14:11:54.810: INFO: Pod "pod-859e758a-cc4b-4953-a71a-408c7d912a51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034407025s STEP: Saw pod success Mar 18 14:11:54.810: INFO: Pod "pod-859e758a-cc4b-4953-a71a-408c7d912a51" satisfied condition "success or failure" Mar 18 14:11:54.823: INFO: Trying to get logs from node iruya-worker pod pod-859e758a-cc4b-4953-a71a-408c7d912a51 container test-container: STEP: delete the pod Mar 18 14:11:54.842: INFO: Waiting for pod pod-859e758a-cc4b-4953-a71a-408c7d912a51 to disappear Mar 18 14:11:54.847: INFO: Pod pod-859e758a-cc4b-4953-a71a-408c7d912a51 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:11:54.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8616" for this suite. 
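[Editor's sketch] The emptyDir test above mounts a memory-backed (tmpfs) volume and has the test container verify 0777 permissions before exiting 0, which is why the pod goes Pending and then straight to Succeeded. A sketch of the volume definition, assuming a recent client-go; the volume name is illustrative:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // tmpfsEmptyDir is an emptyDir backed by RAM rather than node disk; its
    // contents vanish when the pod is removed from the node.
    func tmpfsEmptyDir(name string) corev1.Volume {
        return corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
            },
        }
    }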
Mar 18 14:12:00.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:12:00.978: INFO: namespace emptydir-8616 deletion completed in 6.111478289s • [SLOW TEST:10.270 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:12:00.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Mar 18 14:12:01.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1251' Mar 18 14:12:01.276: INFO: stderr: "" Mar 18 14:12:01.276: INFO: stdout: "pod/pause created\n" Mar 18 14:12:01.277: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 18 14:12:01.277: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1251" to be "running and ready" Mar 18 14:12:01.285: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.203973ms Mar 18 14:12:03.289: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012535424s Mar 18 14:12:05.294: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.0170127s Mar 18 14:12:05.294: INFO: Pod "pause" satisfied condition "running and ready" Mar 18 14:12:05.294: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Mar 18 14:12:05.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1251' Mar 18 14:12:05.402: INFO: stderr: "" Mar 18 14:12:05.402: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 18 14:12:05.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1251' Mar 18 14:12:05.494: INFO: stderr: "" Mar 18 14:12:05.494: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 18 14:12:05.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1251' Mar 18 14:12:05.591: INFO: stderr: "" Mar 18 14:12:05.591: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 18 14:12:05.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1251' Mar 18 14:12:05.678: INFO: stderr: "" Mar 18 14:12:05.678: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Mar 18 14:12:05.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1251' Mar 18 14:12:05.810: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 18 14:12:05.810: INFO: stdout: "pod \"pause\" force deleted\n" Mar 18 14:12:05.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1251' Mar 18 14:12:05.904: INFO: stderr: "No resources found.\n" Mar 18 14:12:05.904: INFO: stdout: "" Mar 18 14:12:05.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1251 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 18 14:12:05.998: INFO: stderr: "" Mar 18 14:12:05.998: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:12:05.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1251" for this suite. 
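The label round-trip above distills to four kubectl invocations; this recap uses an illustrative pause image rather than the framework's generated manifest:

kubectl run pause --image=k8s.gcr.io/pause:3.1 --restart=Never   # stand-in for the test's pause pod
kubectl label pod pause testing-label=testing-label-value        # add the label
kubectl get pod pause -L testing-label                           # the TESTING-LABEL column shows the value
kubectl label pod pause testing-label-                           # a trailing '-' removes the label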
Mar 18 14:12:12.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:12:12.184: INFO: namespace kubectl-1251 deletion completed in 6.119775561s • [SLOW TEST:11.205 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:12:12.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-26a47122-2b38-46ac-b48f-38856ac93015 STEP: Creating secret with name s-test-opt-upd-c8dab099-5d41-4ffe-968e-4be0b8811440 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-26a47122-2b38-46ac-b48f-38856ac93015 STEP: Updating secret s-test-opt-upd-c8dab099-5d41-4ffe-968e-4be0b8811440 STEP: Creating secret with name s-test-opt-create-2d8fe416-d1b8-4b6e-a827-d9fcd396c3a6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:13:44.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6670" for this suite. 
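The mechanism under test here, optional secret sources in a projected volume whose contents track create/update/delete of the backing secrets, can be sketched like this (secret names and keys are illustrative):

kubectl create secret generic s-del --from-literal=data-del=value-del
kubectl create secret generic s-upd --from-literal=data-upd=old
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: proj
      mountPath: /etc/proj
  volumes:
  - name: proj
    projected:
      sources:
      - secret:
          name: s-del
          optional: true    # the pod keeps running even if this secret disappears
      - secret:
          name: s-upd
          optional: true
EOF
kubectl delete secret s-del                          # the key data-del eventually vanishes from the volume
kubectl exec projected-secret-demo -- ls /etc/proj   # the kubelet syncs the change on its next period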
Mar 18 14:14:06.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:14:06.908: INFO: namespace projected-6670 deletion completed in 22.100641258s • [SLOW TEST:114.723 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:14:06.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Mar 18 14:14:06.979: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1915" to be "success or failure" Mar 18 14:14:06.993: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.844408ms Mar 18 14:14:08.997: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018426014s Mar 18 14:14:11.001: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022564356s STEP: Saw pod success Mar 18 14:14:11.001: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 18 14:14:11.004: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 18 14:14:11.046: INFO: Waiting for pod pod-host-path-test to disappear Mar 18 14:14:11.071: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:14:11.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1915" for this suite. 
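What this spec asserts, that a hostPath mount surfaces the mode of the underlying host directory, can be checked by hand with a sketch like the following (pod name, image, and /tmp as the host path are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo
spec:
  restartPolicy: Never
  volumes:
  - name: host-volume
    hostPath:
      path: /tmp
  containers:
  - name: test-container-1
    image: busybox
    # print the mode of the mount point; it should match the host directory (e.g. 1777 for /tmp)
    command: ["sh", "-c", "stat -c '%a' /host"]
    volumeMounts:
    - name: host-volume
      mountPath: /host
EOF
kubectl logs hostpath-mode-demo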
Mar 18 14:14:17.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:14:17.164: INFO: namespace hostpath-1915 deletion completed in 6.089240269s • [SLOW TEST:10.255 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:14:17.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-e3f7993c-5d14-4d1f-9811-3a7f9f9ababb STEP: Creating configMap with name cm-test-opt-upd-b1860262-1f1c-49c4-b81a-05b85bcf362e STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-e3f7993c-5d14-4d1f-9811-3a7f9f9ababb STEP: Updating configmap cm-test-opt-upd-b1860262-1f1c-49c4-b81a-05b85bcf362e STEP: Creating configMap with name cm-test-opt-create-f1f2b076-83c0-46eb-998f-2020ad78dc51 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:15:45.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7392" for this suite. 
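This is the configMap twin of the projected-secret spec earlier in the run; a hand-rolled sketch (configMap names and keys are illustrative):

kubectl create configmap cm-upd --from-literal=data-upd=old
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: proj
      mountPath: /etc/proj
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: cm-upd
          optional: true
      - configMap:
          name: cm-create      # does not exist yet; optional sources tolerate that
          optional: true
EOF
kubectl create configmap cm-create --from-literal=data-create=new   # its key appears in /etc/proj after the kubelet sync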
Mar 18 14:16:07.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:16:07.853: INFO: namespace projected-7392 deletion completed in 22.09258112s • [SLOW TEST:110.689 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:16:07.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:16:11.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-220" for this suite. 
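The spec boils down to: a container that always exits non-zero must surface a terminated state with a reason in the pod status. A minimal sketch (pod name is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: always-fails-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["/bin/false"]   # always exits 1
EOF
kubectl get pod always-fails-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# expect: Error (once the container has run)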
Mar 18 14:16:17.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:16:18.060: INFO: namespace kubelet-test-220 deletion completed in 6.111533703s • [SLOW TEST:10.207 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:16:18.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Mar 18 14:16:18.118: INFO: Waiting up to 5m0s for pod "var-expansion-d987b476-5a1a-458c-be18-70191678cd37" in namespace "var-expansion-4661" to be "success or failure" Mar 18 14:16:18.122: INFO: Pod "var-expansion-d987b476-5a1a-458c-be18-70191678cd37": Phase="Pending", Reason="", readiness=false. Elapsed: 3.741269ms Mar 18 14:16:20.187: INFO: Pod "var-expansion-d987b476-5a1a-458c-be18-70191678cd37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069020082s Mar 18 14:16:22.191: INFO: Pod "var-expansion-d987b476-5a1a-458c-be18-70191678cd37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072435057s STEP: Saw pod success Mar 18 14:16:22.191: INFO: Pod "var-expansion-d987b476-5a1a-458c-be18-70191678cd37" satisfied condition "success or failure" Mar 18 14:16:22.193: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-d987b476-5a1a-458c-be18-70191678cd37 container dapi-container: STEP: delete the pod Mar 18 14:16:22.225: INFO: Waiting for pod var-expansion-d987b476-5a1a-458c-be18-70191678cd37 to disappear Mar 18 14:16:22.259: INFO: Pod var-expansion-d987b476-5a1a-458c-be18-70191678cd37 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:16:22.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4661" for this suite. 
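The substitution being verified is the kubelet's $(VAR) expansion in container args, not shell expansion; for example (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: GREETING
      value: "hello from the environment"
    command: ["sh", "-c"]
    args: ["echo $(GREETING)"]   # $(GREETING) is substituted by the kubelet before sh ever runs
EOF
kubectl logs var-expansion-demo   # expect: hello from the environment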
Mar 18 14:16:28.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:16:28.353: INFO: namespace var-expansion-4661 deletion completed in 6.090707561s • [SLOW TEST:10.292 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:16:28.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 14:16:28.403: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 18 14:16:30.505: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:16:30.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2579" for this suite. 
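The failure condition being surfaced is the ReplicaFailure condition in the RC's status; the scenario can be replayed by hand (names and pause image are illustrative):

kubectl create quota condition-demo --hard=pods=2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-demo
spec:
  replicas: 3                 # one more than the quota allows
  selector:
    name: condition-demo
  template:
    metadata:
      labels:
        name: condition-demo
    spec:
      containers:
      - name: c
        image: k8s.gcr.io/pause:3.1
EOF
kubectl get rc condition-demo -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].reason}'
# expect: FailedCreate
kubectl scale rc condition-demo --replicas=2   # back under quota; the condition is cleared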
Mar 18 14:16:36.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:16:36.664: INFO: namespace replication-controller-2579 deletion completed in 6.129795969s • [SLOW TEST:8.311 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:16:36.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 18 14:16:44.804: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 14:16:44.811: INFO: Pod pod-with-prestop-http-hook still exists Mar 18 14:16:46.811: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 14:16:46.816: INFO: Pod pod-with-prestop-http-hook still exists Mar 18 14:16:48.811: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 14:16:48.815: INFO: Pod pod-with-prestop-http-hook still exists Mar 18 14:16:50.811: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 14:16:50.815: INFO: Pod pod-with-prestop-http-hook still exists Mar 18 14:16:52.811: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 18 14:16:52.815: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:16:52.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9290" for this suite. 
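The hook wiring looks roughly like this; the handler address is illustrative (the suite points the hook at the separately created handler pod that records the request):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop
          port: 8080
          host: 10.244.1.10   # illustrative: IP of a pod serving an HTTP handler on 8080
EOF
kubectl delete pod pod-with-prestop-http-hook   # the kubelet issues the GET before killing the container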
Mar 18 14:17:14.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:17:14.912: INFO: namespace container-lifecycle-hook-9290 deletion completed in 22.085762358s • [SLOW TEST:38.248 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:17:14.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 18 14:17:14.963: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a492790a-c9d5-495d-b31d-3c759c379318" in namespace "downward-api-3352" to be "success or failure" Mar 18 14:17:14.978: INFO: Pod "downwardapi-volume-a492790a-c9d5-495d-b31d-3c759c379318": Phase="Pending", Reason="", readiness=false. Elapsed: 14.905323ms Mar 18 14:17:16.981: INFO: Pod "downwardapi-volume-a492790a-c9d5-495d-b31d-3c759c379318": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018168634s Mar 18 14:17:18.985: INFO: Pod "downwardapi-volume-a492790a-c9d5-495d-b31d-3c759c379318": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022361188s STEP: Saw pod success Mar 18 14:17:18.985: INFO: Pod "downwardapi-volume-a492790a-c9d5-495d-b31d-3c759c379318" satisfied condition "success or failure" Mar 18 14:17:18.988: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a492790a-c9d5-495d-b31d-3c759c379318 container client-container: STEP: delete the pod Mar 18 14:17:19.022: INFO: Waiting for pod downwardapi-volume-a492790a-c9d5-495d-b31d-3c759c379318 to disappear Mar 18 14:17:19.037: INFO: Pod downwardapi-volume-a492790a-c9d5-495d-b31d-3c759c379318 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:17:19.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3352" for this suite. 
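DefaultMode applies one permission mask to every file the downward API volume projects; a minimal sketch (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # -L dereferences the symlink the kubelet places in the volume
    command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400      # applied to every projected file
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downward-defaultmode-demo   # expect: 400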
Mar 18 14:17:25.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:17:25.130: INFO: namespace downward-api-3352 deletion completed in 6.090265452s • [SLOW TEST:10.218 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:17:25.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-7ks7 STEP: Creating a pod to test atomic-volume-subpath Mar 18 14:17:25.208: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7ks7" in namespace "subpath-2063" to be "success or failure" Mar 18 14:17:25.229: INFO: Pod "pod-subpath-test-configmap-7ks7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.681098ms Mar 18 14:17:27.233: INFO: Pod "pod-subpath-test-configmap-7ks7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025138406s Mar 18 14:17:29.236: INFO: Pod "pod-subpath-test-configmap-7ks7": Phase="Running", Reason="", readiness=true. Elapsed: 4.028650713s Mar 18 14:17:31.241: INFO: Pod "pod-subpath-test-configmap-7ks7": Phase="Running", Reason="", readiness=true. Elapsed: 6.033227524s Mar 18 14:17:33.246: INFO: Pod "pod-subpath-test-configmap-7ks7": Phase="Running", Reason="", readiness=true. Elapsed: 8.037848671s Mar 18 14:17:35.250: INFO: Pod "pod-subpath-test-configmap-7ks7": Phase="Running", Reason="", readiness=true. Elapsed: 10.042207267s Mar 18 14:17:37.253: INFO: Pod "pod-subpath-test-configmap-7ks7": Phase="Running", Reason="", readiness=true. Elapsed: 12.045696659s Mar 18 14:17:39.257: INFO: Pod "pod-subpath-test-configmap-7ks7": Phase="Running", Reason="", readiness=true. Elapsed: 14.049663566s Mar 18 14:17:41.261: INFO: Pod "pod-subpath-test-configmap-7ks7": Phase="Running", Reason="", readiness=true. Elapsed: 16.053518865s Mar 18 14:17:43.265: INFO: Pod "pod-subpath-test-configmap-7ks7": Phase="Running", Reason="", readiness=true. Elapsed: 18.056848158s Mar 18 14:17:45.269: INFO: Pod "pod-subpath-test-configmap-7ks7": Phase="Running", Reason="", readiness=true. Elapsed: 20.0614101s Mar 18 14:17:47.273: INFO: Pod "pod-subpath-test-configmap-7ks7": Phase="Running", Reason="", readiness=true. Elapsed: 22.065342398s Mar 18 14:17:49.277: INFO: Pod "pod-subpath-test-configmap-7ks7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.069381332s STEP: Saw pod success Mar 18 14:17:49.277: INFO: Pod "pod-subpath-test-configmap-7ks7" satisfied condition "success or failure" Mar 18 14:17:49.280: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-7ks7 container test-container-subpath-configmap-7ks7: STEP: delete the pod Mar 18 14:17:49.317: INFO: Waiting for pod pod-subpath-test-configmap-7ks7 to disappear Mar 18 14:17:49.321: INFO: Pod pod-subpath-test-configmap-7ks7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-7ks7 Mar 18 14:17:49.321: INFO: Deleting pod "pod-subpath-test-configmap-7ks7" in namespace "subpath-2063" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:17:49.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2063" for this suite. Mar 18 14:17:55.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:17:55.501: INFO: namespace subpath-2063 deletion completed in 6.175818391s • [SLOW TEST:30.370 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:17:55.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Mar 18 14:17:59.613: INFO: Pod pod-hostip-48203f3c-d61d-406e-b2a9-a62ad3f341e1 has hostIP: 172.17.0.6 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:17:59.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2807" for this suite. 
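The host-IP assertion is simply that a scheduled pod's status carries its node's address; it is reproducible in two commands (name and image are illustrative):

kubectl run hostip-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}'   # the IP of the node the pod landed on, e.g. 172.17.0.6 above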
Mar 18 14:18:21.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:18:21.713: INFO: namespace pods-2807 deletion completed in 22.096473545s • [SLOW TEST:26.211 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:18:21.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 18 14:18:21.771: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:18:28.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6670" for this suite. 
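On a RestartNever pod, each init container must run to completion, in order, before the app container starts; a minimal sketch (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: app
    image: busybox
    command: ["true"]
EOF
kubectl get pod init-demo -o jsonpath='{range .status.initContainerStatuses[*]}{.name}={.state.terminated.reason}{"\n"}{end}'
# expect: init1=Completed and init2=Completed before the app container reports anything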
Mar 18 14:18:34.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:18:34.994: INFO: namespace init-container-6670 deletion completed in 6.097380874s • [SLOW TEST:13.281 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:18:34.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Mar 18 14:18:35.580: INFO: created pod pod-service-account-defaultsa Mar 18 14:18:35.581: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 18 14:18:35.586: INFO: created pod pod-service-account-mountsa Mar 18 14:18:35.586: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 18 14:18:35.626: INFO: created pod pod-service-account-nomountsa Mar 18 14:18:35.626: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 18 14:18:35.634: INFO: created pod pod-service-account-defaultsa-mountspec Mar 18 14:18:35.634: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 18 14:18:35.666: INFO: created pod pod-service-account-mountsa-mountspec Mar 18 14:18:35.666: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 18 14:18:35.704: INFO: created pod pod-service-account-nomountsa-mountspec Mar 18 14:18:35.705: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 18 14:18:35.776: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 18 14:18:35.776: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 18 14:18:35.786: INFO: created pod pod-service-account-mountsa-nomountspec Mar 18 14:18:35.786: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 18 14:18:35.802: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 18 14:18:35.802: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:18:35.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2373" for this suite. 
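The opt-out can be declared on the ServiceAccount, on the pod spec, or both, which is why the spec fans out into the nine pod combinations above; the pod-level field takes precedence when the two disagree. A sketch (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false   # opt out at the service-account level
---
apiVersion: v1
kind: Pod
metadata:
  name: nomount-demo
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false  # pod-level setting wins when both are present
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod nomount-demo -o jsonpath='{.spec.containers[0].volumeMounts}'   # empty: no token volume was injected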
Mar 18 14:19:01.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:19:02.021: INFO: namespace svcaccounts-2373 deletion completed in 26.19744008s • [SLOW TEST:27.026 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:19:02.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-e974495a-ae8b-4baa-b1fa-24184c90f0ed STEP: Creating a pod to test consume configMaps Mar 18 14:19:02.104: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9ae231f3-747d-499f-970e-122c00adcd88" in namespace "projected-5966" to be "success or failure" Mar 18 14:19:02.107: INFO: Pod "pod-projected-configmaps-9ae231f3-747d-499f-970e-122c00adcd88": Phase="Pending", Reason="", readiness=false. Elapsed: 3.314387ms Mar 18 14:19:04.112: INFO: Pod "pod-projected-configmaps-9ae231f3-747d-499f-970e-122c00adcd88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008034763s Mar 18 14:19:06.116: INFO: Pod "pod-projected-configmaps-9ae231f3-747d-499f-970e-122c00adcd88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012346017s STEP: Saw pod success Mar 18 14:19:06.116: INFO: Pod "pod-projected-configmaps-9ae231f3-747d-499f-970e-122c00adcd88" satisfied condition "success or failure" Mar 18 14:19:06.119: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-9ae231f3-747d-499f-970e-122c00adcd88 container projected-configmap-volume-test: STEP: delete the pod Mar 18 14:19:06.146: INFO: Waiting for pod pod-projected-configmaps-9ae231f3-747d-499f-970e-122c00adcd88 to disappear Mar 18 14:19:06.155: INFO: Pod pod-projected-configmaps-9ae231f3-747d-499f-970e-122c00adcd88 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:19:06.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5966" for this suite. 
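Consuming a configMap through a projected volume: each key becomes a file under the mount path. A minimal sketch (names and key are illustrative):

kubectl create configmap projected-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-consume-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "cat /etc/cm/data-1"]
    volumeMounts:
    - name: proj
      mountPath: /etc/cm
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: projected-cm
EOF
kubectl logs projected-cm-consume-demo   # expect: value-1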
Mar 18 14:19:12.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:19:12.292: INFO: namespace projected-5966 deletion completed in 6.134091139s • [SLOW TEST:10.271 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:19:12.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-1526 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1526 to expose endpoints map[] Mar 18 14:19:12.402: INFO: Get endpoints failed (15.952495ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 18 14:19:13.406: INFO: successfully validated that service multi-endpoint-test in namespace services-1526 exposes endpoints map[] (1.019945991s elapsed) STEP: Creating pod pod1 in namespace services-1526 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1526 to expose endpoints map[pod1:[100]] Mar 18 14:19:17.465: INFO: successfully validated that service multi-endpoint-test in namespace services-1526 exposes endpoints map[pod1:[100]] (4.052938809s elapsed) STEP: Creating pod pod2 in namespace services-1526 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1526 to expose endpoints map[pod1:[100] pod2:[101]] Mar 18 14:19:20.519: INFO: successfully validated that service multi-endpoint-test in namespace services-1526 exposes endpoints map[pod1:[100] pod2:[101]] (3.050209141s elapsed) STEP: Deleting pod pod1 in namespace services-1526 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1526 to expose endpoints map[pod2:[101]] Mar 18 14:19:21.647: INFO: successfully validated that service multi-endpoint-test in namespace services-1526 exposes endpoints map[pod2:[101]] (1.123055762s elapsed) STEP: Deleting pod pod2 in namespace services-1526 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1526 to expose endpoints map[] Mar 18 14:19:22.704: INFO: successfully validated that service multi-endpoint-test in namespace services-1526 exposes endpoints map[] (1.052154009s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:19:22.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1526" for this suite. 
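A multiport service maps each named service port to its own targetPort, and the endpoints object gains one address/port pair per matching pod, which is what the map[pod1:[100] pod2:[101]] assertions above track. A sketch of the service side (names are illustrative; the suite also creates the pods listening on 100 and 101):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    name: multi-endpoint-test
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
EOF
kubectl get endpoints multi-endpoint-test   # subsets fill in and drain as matching pods come and go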
Mar 18 14:19:44.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:19:44.886: INFO: namespace services-1526 deletion completed in 22.127964913s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:32.594 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:19:44.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 14:19:44.958: INFO: Creating deployment "nginx-deployment" Mar 18 14:19:44.961: INFO: Waiting for observed generation 1 Mar 18 14:19:47.238: INFO: Waiting for all required pods to come up Mar 18 14:19:47.278: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 18 14:19:55.301: INFO: Waiting for deployment "nginx-deployment" to complete Mar 18 14:19:55.307: INFO: Updating deployment "nginx-deployment" with a non-existent image Mar 18 14:19:55.314: INFO: Updating deployment nginx-deployment Mar 18 14:19:55.314: INFO: Waiting for observed generation 2 Mar 18 14:19:57.333: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 18 14:19:57.336: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 18 14:19:57.465: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 18 14:19:57.475: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 18 14:19:57.475: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 18 14:19:57.477: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 18 14:19:57.479: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Mar 18 14:19:57.480: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Mar 18 14:19:57.484: INFO: Updating deployment nginx-deployment Mar 18 14:19:57.484: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Mar 18 14:19:57.492: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 18 14:19:57.527: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 18 14:19:57.726: INFO: 
Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-2476,SelfLink:/apis/apps/v1/namespaces/deployment-2476/deployments/nginx-deployment,UID:93c811ab-319b-4c16-839b-21c587bb43ee,ResourceVersion:531742,Generation:3,CreationTimestamp:2020-03-18 14:19:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-03-18 14:19:56 +0000 UTC 2020-03-18 14:19:44 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-03-18 14:19:57 +0000 UTC 2020-03-18 14:19:57 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 18 14:19:57.813: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-2476,SelfLink:/apis/apps/v1/namespaces/deployment-2476/replicasets/nginx-deployment-55fb7cb77f,UID:1cb8de06-eaf0-4291-b74a-8d80fa3d8f1f,ResourceVersion:531777,Generation:3,CreationTimestamp:2020-03-18 14:19:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 93c811ab-319b-4c16-839b-21c587bb43ee 0xc0031e8a27 0xc0031e8a28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 18 14:19:57.813: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 18 14:19:57.813: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-2476,SelfLink:/apis/apps/v1/namespaces/deployment-2476/replicasets/nginx-deployment-7b8c6f4498,UID:8b2e2ab4-7385-4c7a-9121-03ead265db58,ResourceVersion:531776,Generation:3,CreationTimestamp:2020-03-18 14:19:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 93c811ab-319b-4c16-839b-21c587bb43ee 0xc0031e8af7 0xc0031e8af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 18 14:19:57.858: INFO: Pod "nginx-deployment-55fb7cb77f-2lrbs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2lrbs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-55fb7cb77f-2lrbs,UID:606673a7-0abf-4e30-b497-2223bd2c91fa,ResourceVersion:531691,Generation:0,CreationTimestamp:2020-03-18 14:19:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1cb8de06-eaf0-4291-b74a-8d80fa3d8f1f 0xc0031e9457 0xc0031e9458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031e94d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031e94f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-18 14:19:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.858: INFO: Pod "nginx-deployment-55fb7cb77f-5k5gz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5k5gz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-55fb7cb77f-5k5gz,UID:39472204-5a0b-4758-99f1-19978e9233c6,ResourceVersion:531765,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1cb8de06-eaf0-4291-b74a-8d80fa3d8f1f 0xc0031e95c0 0xc0031e95c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031e9640} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031e9660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.858: INFO: Pod "nginx-deployment-55fb7cb77f-68qxt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-68qxt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-55fb7cb77f-68qxt,UID:6373c736-d46f-490c-aea8-c40a6dddfaac,ResourceVersion:531751,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1cb8de06-eaf0-4291-b74a-8d80fa3d8f1f 0xc0031e96e0 0xc0031e96e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031e9760} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031e9780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.858: INFO: Pod "nginx-deployment-55fb7cb77f-6jm88" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6jm88,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-55fb7cb77f-6jm88,UID:c8c24fdf-1c66-46ab-bbf0-7fffc455df2b,ResourceVersion:531763,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1cb8de06-eaf0-4291-b74a-8d80fa3d8f1f 0xc0031e9800 0xc0031e9801}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031e9880} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031e98a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.858: INFO: Pod "nginx-deployment-55fb7cb77f-fgj8v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fgj8v,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-55fb7cb77f-fgj8v,UID:3f2d316e-0166-4f6f-a2b4-1bfc34ae8638,ResourceVersion:531710,Generation:0,CreationTimestamp:2020-03-18 14:19:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1cb8de06-eaf0-4291-b74a-8d80fa3d8f1f 0xc0031e9920 
0xc0031e9921}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031e99a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031e99c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-18 14:19:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.858: INFO: Pod "nginx-deployment-55fb7cb77f-hgfpx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hgfpx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-55fb7cb77f-hgfpx,UID:397625e7-5748-463d-a91f-cf926a66f825,ResourceVersion:531708,Generation:0,CreationTimestamp:2020-03-18 14:19:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1cb8de06-eaf0-4291-b74a-8d80fa3d8f1f 0xc0031e9a90 0xc0031e9a91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031e9b10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031e9b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-18 14:19:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.858: INFO: Pod "nginx-deployment-55fb7cb77f-krcgr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-krcgr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-55fb7cb77f-krcgr,UID:c57b67fa-10cc-4922-a631-fb95e9db07a4,ResourceVersion:531768,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1cb8de06-eaf0-4291-b74a-8d80fa3d8f1f 0xc0031e9c00 0xc0031e9c01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031e9c80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031e9ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.858: INFO: Pod "nginx-deployment-55fb7cb77f-qnvmc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qnvmc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-55fb7cb77f-qnvmc,UID:4c023036-a5dd-4e2b-9cbc-4d5bfb0b10db,ResourceVersion:531711,Generation:0,CreationTimestamp:2020-03-18 14:19:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1cb8de06-eaf0-4291-b74a-8d80fa3d8f1f 0xc0031e9d20 0xc0031e9d21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031e9da0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031e9dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-18 14:19:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.859: INFO: Pod "nginx-deployment-55fb7cb77f-s7ffj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-s7ffj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-55fb7cb77f-s7ffj,UID:80dc290c-e2c8-4ff5-9deb-63621f04528c,ResourceVersion:531764,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1cb8de06-eaf0-4291-b74a-8d80fa3d8f1f 0xc0031e9e90 0xc0031e9e91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031e9f10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031e9f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.859: INFO: Pod "nginx-deployment-55fb7cb77f-s8n99" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-s8n99,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-55fb7cb77f-s8n99,UID:7e926685-b769-4c88-a271-b2081de7a011,ResourceVersion:531748,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1cb8de06-eaf0-4291-b74a-8d80fa3d8f1f 0xc0031e9fb0 0xc0031e9fb1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfa030} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfa050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.859: INFO: Pod "nginx-deployment-55fb7cb77f-t52bn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t52bn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-55fb7cb77f-t52bn,UID:1bc217a0-8ad0-416c-9f7e-4ff3a484974c,ResourceVersion:531775,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1cb8de06-eaf0-4291-b74a-8d80fa3d8f1f 0xc002cfa0d0 0xc002cfa0d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfa160} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfa180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.859: INFO: Pod "nginx-deployment-55fb7cb77f-v2fxl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v2fxl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-55fb7cb77f-v2fxl,UID:7f1b54d6-573b-4f82-b908-7e2f6b49f400,ResourceVersion:531696,Generation:0,CreationTimestamp:2020-03-18 14:19:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1cb8de06-eaf0-4291-b74a-8d80fa3d8f1f 0xc002cfa200 0xc002cfa201}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfa290} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfa2b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-18 14:19:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.859: INFO: Pod "nginx-deployment-55fb7cb77f-wcf8j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wcf8j,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-55fb7cb77f-wcf8j,UID:153dc232-926b-4b58-ad9b-d6f8fa3aedf2,ResourceVersion:531738,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1cb8de06-eaf0-4291-b74a-8d80fa3d8f1f 0xc002cfa380 0xc002cfa381}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfa400} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfa420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.859: INFO: Pod "nginx-deployment-7b8c6f4498-4svt2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4svt2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-4svt2,UID:dc02b435-984f-442e-99dc-3f73723fdb32,ResourceVersion:531623,Generation:0,CreationTimestamp:2020-03-18 14:19:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfa4a0 0xc002cfa4a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfa510} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002cfa530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.98,StartTime:2020-03-18 14:19:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 14:19:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f1e591415e96a15238ecc8212d25e860a9a964d15f817308ac25433859a033a0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.859: INFO: Pod "nginx-deployment-7b8c6f4498-556bg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-556bg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-556bg,UID:c0c4635f-c909-4a71-a4d1-b0105c55d742,ResourceVersion:531647,Generation:0,CreationTimestamp:2020-03-18 14:19:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfa600 0xc002cfa601}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfa680} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfa6a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.101,StartTime:2020-03-18 14:19:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 14:19:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://61f0456c8370db92fee86b3ffdea97c3ccc69596fef1fc7c7b150cd0d98520af}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.860: INFO: Pod "nginx-deployment-7b8c6f4498-676cj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-676cj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-676cj,UID:58845cac-180f-443f-b79d-a749e041c49a,ResourceVersion:531783,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfa780 0xc002cfa781}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfa7f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfa810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 
14:19:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-18 14:19:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.860: INFO: Pod "nginx-deployment-7b8c6f4498-6bcvn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6bcvn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-6bcvn,UID:7cc2794b-ebe4-40b7-8e0c-5c767dc65904,ResourceVersion:531770,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfa9c0 0xc002cfa9c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfaa30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfaa50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.860: INFO: Pod "nginx-deployment-7b8c6f4498-88bsl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-88bsl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-88bsl,UID:0498cec9-afb6-4a5a-89e4-dd7e7a200bdf,ResourceVersion:531629,Generation:0,CreationTimestamp:2020-03-18 14:19:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfaad0 0xc002cfaad1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfab40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfab60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.99,StartTime:2020-03-18 14:19:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 14:19:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1db78bfbeb64d8e73bfaa45d38e4009ce9e56fd89baf11b4bc44add5a856c9c3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.860: INFO: Pod "nginx-deployment-7b8c6f4498-8lq2g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8lq2g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-8lq2g,UID:8c78dd3a-b70f-44a3-a6c9-195857dc54bb,ResourceVersion:531773,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfac30 0xc002cfac31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfaca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfacc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.860: INFO: Pod "nginx-deployment-7b8c6f4498-d8mg2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d8mg2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-d8mg2,UID:815510e6-7497-41b5-91a0-976bb34d9604,ResourceVersion:531778,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfad40 0xc002cfad41}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfadb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfadd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-18 14:19:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.860: INFO: Pod "nginx-deployment-7b8c6f4498-fkp62" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fkp62,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-fkp62,UID:669add56-7fd4-4960-80e3-c5530fcaa95d,ResourceVersion:531638,Generation:0,CreationTimestamp:2020-03-18 14:19:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfae90 0xc002cfae91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfaf20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfaf40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.121,StartTime:2020-03-18 14:19:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 14:19:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://cae44bc9791ab2894f855834e133f9a2bd61b19e538dec37988e754bbdd5db41}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.860: INFO: Pod "nginx-deployment-7b8c6f4498-h5z8v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h5z8v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-h5z8v,UID:c4a9977b-9803-4fd7-81e3-dcb3e5238b70,ResourceVersion:531754,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfb010 0xc002cfb011}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfb0a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfb0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.860: INFO: Pod "nginx-deployment-7b8c6f4498-h8ltw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h8ltw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-h8ltw,UID:d5ac84b6-9ab4-4fb9-ad08-47f97e924f5c,ResourceVersion:531771,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfb140 0xc002cfb141}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfb1b0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002cfb1d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.860: INFO: Pod "nginx-deployment-7b8c6f4498-jmcgr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jmcgr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-jmcgr,UID:76c95e88-99b2-4c54-b910-52379a328043,ResourceVersion:531642,Generation:0,CreationTimestamp:2020-03-18 14:19:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfb250 0xc002cfb251}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfb2d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfb2f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.122,StartTime:2020-03-18 14:19:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 14:19:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
containerd://ca15670befd33eae3191703ec9c069f6c951651923a085d704ec19ebedd15c93}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.861: INFO: Pod "nginx-deployment-7b8c6f4498-lctzq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lctzq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-lctzq,UID:031e5e95-8643-4bd2-ab3c-9f659299cd2c,ResourceVersion:531650,Generation:0,CreationTimestamp:2020-03-18 14:19:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfb3c0 0xc002cfb3c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfb430} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfb450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.100,StartTime:2020-03-18 14:19:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 14:19:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9215c1384d19a167a431f431703615b04c0d04f900dc2cbbdfd4165a8a53f1d1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.861: INFO: Pod "nginx-deployment-7b8c6f4498-lg6rg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lg6rg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-lg6rg,UID:b97621f1-b295-48f4-98f4-a63a70a10b68,ResourceVersion:531615,Generation:0,CreationTimestamp:2020-03-18 14:19:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfb520 0xc002cfb521}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfb590} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfb5b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.97,StartTime:2020-03-18 14:19:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 14:19:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://57988c7a347e08d6ac5baf83ce79c304289c113430a728b1dccd05f3237fd6a3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.861: INFO: Pod "nginx-deployment-7b8c6f4498-lzgl9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lzgl9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-lzgl9,UID:f1581147-49d0-4dea-ba88-b1f64109bacf,ResourceVersion:531761,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfb690 0xc002cfb691}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfb700} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfb720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.861: INFO: Pod "nginx-deployment-7b8c6f4498-qjnxg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qjnxg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-qjnxg,UID:2ae8e2d0-fc3a-410f-9523-6bdc5d860eed,ResourceVersion:531755,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfb7c0 0xc002cfb7c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfb830} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfb850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.861: INFO: Pod "nginx-deployment-7b8c6f4498-qwb7j" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qwb7j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-qwb7j,UID:74d8212a-be9a-4343-9d2d-db075fec1008,ResourceVersion:531598,Generation:0,CreationTimestamp:2020-03-18 14:19:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfb8d0 0xc002cfb8d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfb940} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfb960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.120,StartTime:2020-03-18 14:19:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-18 14:19:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d4d1cad2747686b18b952a4571ccb79583dc46e20c1bf379531130d0879f15fe}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.861: INFO: Pod "nginx-deployment-7b8c6f4498-tbc48" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tbc48,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-tbc48,UID:6538541f-cd70-40ca-bce3-44c788d6369d,ResourceVersion:531772,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfba30 0xc002cfba31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfbaa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfbac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.861: INFO: Pod "nginx-deployment-7b8c6f4498-tc5vm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tc5vm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-tc5vm,UID:d1d3739d-322d-4917-8249-2cf5719035a3,ResourceVersion:531769,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfbb40 0xc002cfbb41}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfbbd0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002cfbbf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.862: INFO: Pod "nginx-deployment-7b8c6f4498-wvphf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wvphf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-wvphf,UID:36c92da2-d38e-4e5a-8d1e-4e6e93f4d426,ResourceVersion:531737,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfbc70 0xc002cfbc71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfbce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfbd00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 18 14:19:57.862: INFO: Pod "nginx-deployment-7b8c6f4498-xlsd7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xlsd7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2476,SelfLink:/api/v1/namespaces/deployment-2476/pods/nginx-deployment-7b8c6f4498-xlsd7,UID:36190e2f-5fdc-4e03-a3f6-dbb36e7a5582,ResourceVersion:531753,Generation:0,CreationTimestamp:2020-03-18 14:19:57 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b2e2ab4-7385-4c7a-9121-03ead265db58 0xc002cfbd80 0xc002cfbd81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqdlm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqdlm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kqdlm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cfbdf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cfbe10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:19:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:19:57.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2476" for this suite. 
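The dump above is the full set of Pods owned by ReplicaSet nginx-deployment-7b8c6f4498 while proportional scaling is in flight: the older replicas are Running on iruya-worker and iruya-worker2, the newer ones are still Pending with ContainerCreating or unscheduled statuses. For orientation, here is a minimal Go sketch of a Deployment shaped like the one under test, built with the same k8s.io API types the suite uses. Name, namespace, labels, and image are taken from the dump; the replica count and the maxSurge/maxUnavailable bounds that proportional scaling balances against are illustrative assumptions, not the suite's actual values.

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    // Proportional scaling splits newly requested replicas across the old
    // and new ReplicaSets in proportion to their current sizes, within the
    // maxSurge/maxUnavailable budget of the rolling update.
    surge := intstr.FromInt(3)       // assumption
    unavailable := intstr.FromInt(2) // assumption
    d := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment", Namespace: "deployment-2476"},
        Spec: appsv1.DeploymentSpec{
            Replicas: int32Ptr(10), // assumption: replica count before the scaling step
            Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
            Strategy: appsv1.DeploymentStrategy{
                Type: appsv1.RollingUpdateDeploymentStrategyType,
                RollingUpdate: &appsv1.RollingUpdateDeployment{
                    MaxSurge:       &surge,
                    MaxUnavailable: &unavailable,
                },
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
                Spec: corev1.PodSpec{Containers: []corev1.Container{{
                    Name:  "nginx",
                    Image: "docker.io/library/nginx:1.14-alpine",
                }}},
            },
        },
    }
    b, _ := json.MarshalIndent(d, "", "  ")
    fmt.Println(string(b)) // print the manifest instead of submitting it
}

Printing the object keeps the sketch self-contained; against a live cluster the same struct would be submitted through client-go.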
Mar 18 14:20:18.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:20:18.272: INFO: namespace deployment-2476 deletion completed in 20.337861957s • [SLOW TEST:33.385 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:20:18.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 18 14:20:18.407: INFO: Waiting up to 5m0s for pod "pod-0146b654-1ba6-47d1-8149-7528a9cf4956" in namespace "emptydir-9770" to be "success or failure" Mar 18 14:20:18.410: INFO: Pod "pod-0146b654-1ba6-47d1-8149-7528a9cf4956": Phase="Pending", Reason="", readiness=false. Elapsed: 3.451118ms Mar 18 14:20:20.414: INFO: Pod "pod-0146b654-1ba6-47d1-8149-7528a9cf4956": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007486823s Mar 18 14:20:22.418: INFO: Pod "pod-0146b654-1ba6-47d1-8149-7528a9cf4956": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011425644s STEP: Saw pod success Mar 18 14:20:22.418: INFO: Pod "pod-0146b654-1ba6-47d1-8149-7528a9cf4956" satisfied condition "success or failure" Mar 18 14:20:22.421: INFO: Trying to get logs from node iruya-worker pod pod-0146b654-1ba6-47d1-8149-7528a9cf4956 container test-container: STEP: delete the pod Mar 18 14:20:22.491: INFO: Waiting for pod pod-0146b654-1ba6-47d1-8149-7528a9cf4956 to disappear Mar 18 14:20:22.494: INFO: Pod pod-0146b654-1ba6-47d1-8149-7528a9cf4956 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:20:22.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9770" for this suite. 
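The EmptyDir variants in this suite all follow the pattern visible above: create a pod whose container writes a file into an emptyDir volume and reports its mode, wait for the pod to reach Succeeded, check the container log, delete the pod. A rough sketch of such a pod follows, assuming a busybox stand-in for the suite's mounttest image and an arbitrary non-root UID; only the volume shape, the 0777 mode, and the default (node disk) medium come from the test name.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser: int64Ptr(1001), // assumption: any non-root UID
            },
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                // An empty EmptyDirVolumeSource selects the node's default
                // storage medium, the "default" in "(non-root,0777,default)".
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "docker.io/library/busybox:1.29", // assumption: stand-in image
                Command: []string{"sh", "-c",
                    "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}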
Mar 18 14:20:28.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:20:28.586: INFO: namespace emptydir-9770 deletion completed in 6.088354511s • [SLOW TEST:10.315 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:20:28.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 14:20:28.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3285' Mar 18 14:20:31.731: INFO: stderr: "" Mar 18 14:20:31.731: INFO: stdout: "replicationcontroller/redis-master created\n" Mar 18 14:20:31.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3285' Mar 18 14:20:32.016: INFO: stderr: "" Mar 18 14:20:32.016: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Mar 18 14:20:33.035: INFO: Selector matched 1 pods for map[app:redis] Mar 18 14:20:33.035: INFO: Found 0 / 1 Mar 18 14:20:34.020: INFO: Selector matched 1 pods for map[app:redis] Mar 18 14:20:34.020: INFO: Found 0 / 1 Mar 18 14:20:35.020: INFO: Selector matched 1 pods for map[app:redis] Mar 18 14:20:35.021: INFO: Found 1 / 1 Mar 18 14:20:35.021: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 18 14:20:35.024: INFO: Selector matched 1 pods for map[app:redis] Mar 18 14:20:35.024: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
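The two kubectl create calls above piped manifests for a ReplicationController and a Service to stdin. Reconstructed from the describe output that follows, the objects look roughly like this sketch; the replica count, labels, image, and the named target port all appear in the log, but this is not the suite's actual fixture file.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    labels := map[string]string{"app": "redis", "role": "master"}
    rc := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "redis-master", Namespace: "kubectl-3285"},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: int32Ptr(1), // "Replicas: 1 current / 1 desired" in the describe output
            Selector: labels,
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{Containers: []corev1.Container{{
                    Name:  "redis-master",
                    Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                    Ports: []corev1.ContainerPort{{Name: "redis-server", ContainerPort: 6379}},
                }}},
            },
        },
    }
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "redis-master", Namespace: "kubectl-3285", Labels: labels},
        Spec: corev1.ServiceSpec{
            Selector: labels, // matches the RC's pods
            Ports:    []corev1.ServicePort{{Port: 6379, TargetPort: intstr.FromString("redis-server")}},
        },
    }
    for _, obj := range []interface{}{rc, svc} {
        b, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(b))
    }
}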
Mar 18 14:20:35.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-8vm2t --namespace=kubectl-3285' Mar 18 14:20:35.127: INFO: stderr: "" Mar 18 14:20:35.127: INFO: stdout: "Name: redis-master-8vm2t\nNamespace: kubectl-3285\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Wed, 18 Mar 2020 14:20:31 +0000\nLabels: app=redis\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.1.113\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://adfbec6a73a664756b260c1cf781fb6e14004193f05bb50af84bdc9a6fe26e24\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 18 Mar 2020 14:20:34 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbbfr (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-tbbfr:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-tbbfr\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-3285/redis-master-8vm2t to iruya-worker2\n Normal Pulled 3s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n" Mar 18 14:20:35.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-3285' Mar 18 14:20:35.238: INFO: stderr: "" Mar 18 14:20:35.238: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3285\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-8vm2t\n" Mar 18 14:20:35.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-3285' Mar 18 14:20:35.337: INFO: stderr: "" Mar 18 14:20:35.337: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3285\nLabels: app=redis\n role=master\nAnnotations: <none>\nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.98.7.234\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.113:6379\nSession Affinity: None\nEvents: <none>\n" Mar 18 14:20:35.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Mar 18 14:20:35.455: INFO: stderr: "" Mar 18 14:20:35.455: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 18 Mar 2020 14:20:33 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 18 Mar 2020 14:20:33 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 18 Mar 2020 14:20:33 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 18 Mar 2020 14:20:33 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d19h\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 2d19h\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 2d19h\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 2d19h\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d19h\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 2d19h\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d19h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Mar 18 14:20:35.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3285' Mar 18 14:20:35.553: INFO: stderr: "" Mar 18 14:20:35.553: INFO: stdout: "Name: kubectl-3285\nLabels: e2e-framework=kubectl\n e2e-run=45fa171b-fe9c-4d42-91a3-8e02975baf31\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:20:35.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3285" for this suite. 
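Note that this test never touches client-go for its assertions: every probe shells out to the kubectl binary, as the Running '/usr/local/bin/kubectl ...' lines show. A minimal way to reproduce the same describe probes outside the suite, assuming kubectl is on PATH and reusing the kubeconfig path and object names from this run:

package main

import (
    "fmt"
    "log"
    "os/exec"
)

func main() {
    common := []string{"--kubeconfig", "/root/.kube/config", "--namespace", "kubectl-3285"}
    for _, args := range [][]string{
        {"describe", "pod", "redis-master-8vm2t"},
        {"describe", "rc", "redis-master"},
        {"describe", "service", "redis-master"},
    } {
        cmd := exec.Command("kubectl", append(append([]string{}, common...), args...)...)
        out, err := cmd.CombinedOutput() // stdout and stderr together, much as the suite logs them
        if err != nil {
            log.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
        }
        fmt.Printf("%s\n", out)
    }
}

The test then checks this output for the fields it considers relevant (name, namespace, node, labels, state, controller references), which is why the whole describe payload is logged verbatim above.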
Mar 18 14:20:57.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:20:57.642: INFO: namespace kubectl-3285 deletion completed in 22.086464449s • [SLOW TEST:29.055 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:20:57.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-126544c0-b5bf-40e2-9852-20414050428c STEP: Creating a pod to test consume configMaps Mar 18 14:20:57.760: INFO: Waiting up to 5m0s for pod "pod-configmaps-a79518d3-6bdb-425e-a40f-3a9b070401ec" in namespace "configmap-8042" to be "success or failure" Mar 18 14:20:57.764: INFO: Pod "pod-configmaps-a79518d3-6bdb-425e-a40f-3a9b070401ec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.550979ms Mar 18 14:20:59.768: INFO: Pod "pod-configmaps-a79518d3-6bdb-425e-a40f-3a9b070401ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007636059s Mar 18 14:21:01.772: INFO: Pod "pod-configmaps-a79518d3-6bdb-425e-a40f-3a9b070401ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012121403s STEP: Saw pod success Mar 18 14:21:01.772: INFO: Pod "pod-configmaps-a79518d3-6bdb-425e-a40f-3a9b070401ec" satisfied condition "success or failure" Mar 18 14:21:01.775: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a79518d3-6bdb-425e-a40f-3a9b070401ec container configmap-volume-test: STEP: delete the pod Mar 18 14:21:01.801: INFO: Waiting for pod pod-configmaps-a79518d3-6bdb-425e-a40f-3a9b070401ec to disappear Mar 18 14:21:01.819: INFO: Pod pod-configmaps-a79518d3-6bdb-425e-a40f-3a9b070401ec no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:21:01.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8042" for this suite. 
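"With mappings" in the test name means the ConfigMap keys are not projected under their own names; each key is remapped to an explicit path inside the volume through the items field. A sketch of the two objects such a test builds: the key, value, and paths here are illustrative assumptions, while the volume shape and the configmap-volume-test container name match the log.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map"},
        Data:       map[string]string{"data-1": "value-1"}, // assumption: illustrative key/value
    }
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-configmaps-"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                    // The mapping: key "data-1" appears at path/to/data-2
                    // instead of at a file named data-1.
                    Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                }},
            }},
            Containers: []corev1.Container{{
                Name:         "configmap-volume-test",
                Image:        "docker.io/library/busybox:1.29", // assumption: stand-in image
                Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
            }},
        },
    }
    for _, obj := range []interface{}{cm, pod} {
        b, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(b))
    }
}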
Mar 18 14:21:07.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:21:07.918: INFO: namespace configmap-8042 deletion completed in 6.096239857s • [SLOW TEST:10.275 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:21:07.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:21:12.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2741" for this suite. Mar 18 14:21:18.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:21:18.272: INFO: namespace emptydir-wrapper-2741 deletion completed in 6.139836584s • [SLOW TEST:10.353 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:21:18.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-adfd0bd7-cd86-48d8-972f-6d2c3328a656 STEP: Creating a pod to test consume configMaps Mar 18 14:21:18.352: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-78201980-938a-47f6-8622-b748a9ebf03c" in namespace "projected-9929" to be "success or failure" Mar 18 14:21:18.366: INFO: Pod "pod-projected-configmaps-78201980-938a-47f6-8622-b748a9ebf03c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.340707ms Mar 18 14:21:20.370: INFO: Pod "pod-projected-configmaps-78201980-938a-47f6-8622-b748a9ebf03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017168933s Mar 18 14:21:22.377: INFO: Pod "pod-projected-configmaps-78201980-938a-47f6-8622-b748a9ebf03c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02457696s STEP: Saw pod success Mar 18 14:21:22.377: INFO: Pod "pod-projected-configmaps-78201980-938a-47f6-8622-b748a9ebf03c" satisfied condition "success or failure" Mar 18 14:21:22.380: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-78201980-938a-47f6-8622-b748a9ebf03c container projected-configmap-volume-test: STEP: delete the pod Mar 18 14:21:22.397: INFO: Waiting for pod pod-projected-configmaps-78201980-938a-47f6-8622-b748a9ebf03c to disappear Mar 18 14:21:22.401: INFO: Pod pod-projected-configmaps-78201980-938a-47f6-8622-b748a9ebf03c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:21:22.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9929" for this suite. Mar 18 14:21:28.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:21:28.487: INFO: namespace projected-9929 deletion completed in 6.083035496s • [SLOW TEST:10.215 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:21:28.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 18 14:21:28.558: INFO: Waiting up to 5m0s for pod "pod-24146c55-db3b-4e6f-854b-bd79e15346ba" in namespace "emptydir-7954" to be "success or failure" Mar 18 14:21:28.561: INFO: Pod "pod-24146c55-db3b-4e6f-854b-bd79e15346ba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.593876ms Mar 18 14:21:30.565: INFO: Pod "pod-24146c55-db3b-4e6f-854b-bd79e15346ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007321465s Mar 18 14:21:32.569: INFO: Pod "pod-24146c55-db3b-4e6f-854b-bd79e15346ba": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011282071s STEP: Saw pod success Mar 18 14:21:32.569: INFO: Pod "pod-24146c55-db3b-4e6f-854b-bd79e15346ba" satisfied condition "success or failure" Mar 18 14:21:32.572: INFO: Trying to get logs from node iruya-worker pod pod-24146c55-db3b-4e6f-854b-bd79e15346ba container test-container: STEP: delete the pod Mar 18 14:21:32.605: INFO: Waiting for pod pod-24146c55-db3b-4e6f-854b-bd79e15346ba to disappear Mar 18 14:21:32.621: INFO: Pod pod-24146c55-db3b-4e6f-854b-bd79e15346ba no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:21:32.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7954" for this suite. Mar 18 14:21:38.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:21:38.717: INFO: namespace emptydir-7954 deletion completed in 6.091781737s • [SLOW TEST:10.229 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:21:38.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Mar 18 14:21:38.774: INFO: Waiting up to 5m0s for pod "var-expansion-c77c7bae-dc45-43f4-aa25-fa32cf86d03b" in namespace "var-expansion-8434" to be "success or failure" Mar 18 14:21:38.777: INFO: Pod "var-expansion-c77c7bae-dc45-43f4-aa25-fa32cf86d03b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.902332ms Mar 18 14:21:40.781: INFO: Pod "var-expansion-c77c7bae-dc45-43f4-aa25-fa32cf86d03b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00705051s Mar 18 14:21:42.784: INFO: Pod "var-expansion-c77c7bae-dc45-43f4-aa25-fa32cf86d03b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010764669s STEP: Saw pod success Mar 18 14:21:42.784: INFO: Pod "var-expansion-c77c7bae-dc45-43f4-aa25-fa32cf86d03b" satisfied condition "success or failure" Mar 18 14:21:42.787: INFO: Trying to get logs from node iruya-worker pod var-expansion-c77c7bae-dc45-43f4-aa25-fa32cf86d03b container dapi-container: STEP: delete the pod Mar 18 14:21:42.802: INFO: Waiting for pod var-expansion-c77c7bae-dc45-43f4-aa25-fa32cf86d03b to disappear Mar 18 14:21:42.807: INFO: Pod var-expansion-c77c7bae-dc45-43f4-aa25-fa32cf86d03b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:21:42.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8434" for this suite. Mar 18 14:21:48.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:21:48.922: INFO: namespace var-expansion-8434 deletion completed in 6.111685206s • [SLOW TEST:10.205 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:21:48.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5107 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 18 14:21:49.001: INFO: Found 0 stateful pods, waiting for 3 Mar 18 14:21:59.006: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 14:21:59.006: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 14:21:59.006: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 18 14:21:59.032: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 18 14:22:09.072: INFO: Updating stateful set ss2 Mar 18 14:22:09.143: INFO: Waiting for Pod statefulset-5107/ss2-2 to have 
revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 18 14:22:19.149: INFO: Waiting for Pod statefulset-5107/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 18 14:22:29.487: INFO: Found 2 stateful pods, waiting for 3 Mar 18 14:22:39.498: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 14:22:39.498: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 14:22:39.498: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 18 14:22:39.520: INFO: Updating stateful set ss2 Mar 18 14:22:39.553: INFO: Waiting for Pod statefulset-5107/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 18 14:22:49.579: INFO: Updating stateful set ss2 Mar 18 14:22:49.587: INFO: Waiting for StatefulSet statefulset-5107/ss2 to complete update Mar 18 14:22:49.587: INFO: Waiting for Pod statefulset-5107/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 18 14:22:59.596: INFO: Deleting all statefulset in ns statefulset-5107 Mar 18 14:22:59.599: INFO: Scaling statefulset ss2 to 0 Mar 18 14:23:19.617: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 14:23:19.620: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:23:19.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5107" for this suite. Mar 18 14:23:25.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:23:25.749: INFO: namespace statefulset-5107 deletion completed in 6.089554078s • [SLOW TEST:96.827 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:23:25.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 18 14:23:25.795: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 18 14:23:25.815: INFO: Waiting for terminating namespaces to be deleted... 
Mar 18 14:23:25.817: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 18 14:23:25.824: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 18 14:23:25.824: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 14:23:25.825: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Mar 18 14:23:25.825: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 14:23:25.825: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 18 14:23:25.832: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Mar 18 14:23:25.832: INFO: Container kube-proxy ready: true, restart count 0 Mar 18 14:23:25.832: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Mar 18 14:23:25.832: INFO: Container kindnet-cni ready: true, restart count 0 Mar 18 14:23:25.832: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Mar 18 14:23:25.833: INFO: Container coredns ready: true, restart count 0 Mar 18 14:23:25.833: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Mar 18 14:23:25.833: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Mar 18 14:23:25.943: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 Mar 18 14:23:25.943: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 Mar 18 14:23:25.943: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker Mar 18 14:23:25.943: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Mar 18 14:23:25.943: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Mar 18 14:23:25.943: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-3f29707b-e5d1-4517-a1e5-7548fc3527b7.15fd6c037279de0a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8099/filler-pod-3f29707b-e5d1-4517-a1e5-7548fc3527b7 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-3f29707b-e5d1-4517-a1e5-7548fc3527b7.15fd6c03b9ec2c97], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3f29707b-e5d1-4517-a1e5-7548fc3527b7.15fd6c03f238a291], Reason = [Created], Message = [Created container filler-pod-3f29707b-e5d1-4517-a1e5-7548fc3527b7] STEP: Considering event: Type = [Normal], Name = [filler-pod-3f29707b-e5d1-4517-a1e5-7548fc3527b7.15fd6c04042ccd0b], Reason = [Started], Message = [Started container filler-pod-3f29707b-e5d1-4517-a1e5-7548fc3527b7] STEP: Considering event: Type = [Normal], Name = [filler-pod-44133322-d9f3-4013-9679-8e2d8a9060c0.15fd6c03721e28e3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8099/filler-pod-44133322-d9f3-4013-9679-8e2d8a9060c0 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-44133322-d9f3-4013-9679-8e2d8a9060c0.15fd6c03f1f74e45], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-44133322-d9f3-4013-9679-8e2d8a9060c0.15fd6c0414a2fa5a], Reason = [Created], Message = [Created container filler-pod-44133322-d9f3-4013-9679-8e2d8a9060c0] STEP: Considering event: Type = [Normal], Name = [filler-pod-44133322-d9f3-4013-9679-8e2d8a9060c0.15fd6c0422d03f36], Reason = [Started], Message = [Started container filler-pod-44133322-d9f3-4013-9679-8e2d8a9060c0] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fd6c0461e6eb8c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:23:31.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8099" for this suite. 
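For readers reconstructing what this predicates test exercises: each filler pod carries an explicit CPU request sized so that one more pod cannot fit on any node, which is what produces the FailedScheduling event above. A minimal sketch of such a spec using the k8s.io/api types (the pod name, namespace, and the 600m figure are illustrative assumptions, not values recovered from this run; the pause image matches the events above):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod returns a pod with a fixed CPU request, mirroring the "filler"
// pods the test uses to consume node capacity before the oversized pod.
func fillerPod(name, cpu string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "sched-pred-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse(cpu),
					},
				},
			}},
		},
	}
}

func main() {
	// A pod whose request exceeds the remaining capacity of every node stays
	// Pending with a FailedScheduling event like the one logged above.
	fmt.Println(fillerPod("filler-pod-demo", "600m").Spec.Containers[0].Resources.Requests.Cpu())
}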
Mar 18 14:23:37.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:23:37.225: INFO: namespace sched-pred-8099 deletion completed in 6.173657837s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:11.476 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:23:37.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9878 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 18 14:23:37.266: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 18 14:23:57.395: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.149 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9878 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 14:23:57.395: INFO: >>> kubeConfig: /root/.kube/config I0318 14:23:57.431890 6 log.go:172] (0xc001862630) (0xc001b921e0) Create stream I0318 14:23:57.431922 6 log.go:172] (0xc001862630) (0xc001b921e0) Stream added, broadcasting: 1 I0318 14:23:57.434207 6 log.go:172] (0xc001862630) Reply frame received for 1 I0318 14:23:57.434264 6 log.go:172] (0xc001862630) (0xc0032df900) Create stream I0318 14:23:57.434278 6 log.go:172] (0xc001862630) (0xc0032df900) Stream added, broadcasting: 3 I0318 14:23:57.435179 6 log.go:172] (0xc001862630) Reply frame received for 3 I0318 14:23:57.435215 6 log.go:172] (0xc001862630) (0xc002bbfc20) Create stream I0318 14:23:57.435224 6 log.go:172] (0xc001862630) (0xc002bbfc20) Stream added, broadcasting: 5 I0318 14:23:57.435903 6 log.go:172] (0xc001862630) Reply frame received for 5 I0318 14:23:58.483665 6 log.go:172] (0xc001862630) Data frame received for 3 I0318 14:23:58.483776 6 log.go:172] (0xc0032df900) (3) Data frame handling I0318 14:23:58.483815 6 log.go:172] (0xc0032df900) (3) Data frame sent I0318 14:23:58.483839 6 log.go:172] (0xc001862630) Data frame received for 3 I0318 14:23:58.483858 6 log.go:172] (0xc0032df900) (3) Data frame handling I0318 14:23:58.483877 6 log.go:172] (0xc001862630) Data frame received for 5 I0318 14:23:58.483890 6 log.go:172] (0xc002bbfc20) (5) Data frame handling I0318 14:23:58.486221 6 log.go:172] (0xc001862630) Data frame received for 1 
I0318 14:23:58.486251 6 log.go:172] (0xc001b921e0) (1) Data frame handling I0318 14:23:58.486271 6 log.go:172] (0xc001b921e0) (1) Data frame sent I0318 14:23:58.486289 6 log.go:172] (0xc001862630) (0xc001b921e0) Stream removed, broadcasting: 1 I0318 14:23:58.486399 6 log.go:172] (0xc001862630) (0xc001b921e0) Stream removed, broadcasting: 1 I0318 14:23:58.486415 6 log.go:172] (0xc001862630) (0xc0032df900) Stream removed, broadcasting: 3 I0318 14:23:58.486486 6 log.go:172] (0xc001862630) Go away received I0318 14:23:58.486558 6 log.go:172] (0xc001862630) (0xc002bbfc20) Stream removed, broadcasting: 5 Mar 18 14:23:58.486: INFO: Found all expected endpoints: [netserver-0] Mar 18 14:23:58.489: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.120 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9878 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 18 14:23:58.489: INFO: >>> kubeConfig: /root/.kube/config I0318 14:23:58.530550 6 log.go:172] (0xc002de3550) (0xc0026a80a0) Create stream I0318 14:23:58.530571 6 log.go:172] (0xc002de3550) (0xc0026a80a0) Stream added, broadcasting: 1 I0318 14:23:58.532750 6 log.go:172] (0xc002de3550) Reply frame received for 1 I0318 14:23:58.532790 6 log.go:172] (0xc002de3550) (0xc002f159a0) Create stream I0318 14:23:58.532803 6 log.go:172] (0xc002de3550) (0xc002f159a0) Stream added, broadcasting: 3 I0318 14:23:58.534169 6 log.go:172] (0xc002de3550) Reply frame received for 3 I0318 14:23:58.534222 6 log.go:172] (0xc002de3550) (0xc002f15a40) Create stream I0318 14:23:58.534235 6 log.go:172] (0xc002de3550) (0xc002f15a40) Stream added, broadcasting: 5 I0318 14:23:58.535164 6 log.go:172] (0xc002de3550) Reply frame received for 5 I0318 14:23:59.620749 6 log.go:172] (0xc002de3550) Data frame received for 3 I0318 14:23:59.620792 6 log.go:172] (0xc002f159a0) (3) Data frame handling I0318 14:23:59.620811 6 log.go:172] (0xc002f159a0) (3) Data frame sent I0318 14:23:59.620824 6 log.go:172] (0xc002de3550) Data frame received for 3 I0318 14:23:59.620836 6 log.go:172] (0xc002f159a0) (3) Data frame handling I0318 14:23:59.620877 6 log.go:172] (0xc002de3550) Data frame received for 5 I0318 14:23:59.620916 6 log.go:172] (0xc002f15a40) (5) Data frame handling I0318 14:23:59.622547 6 log.go:172] (0xc002de3550) Data frame received for 1 I0318 14:23:59.622578 6 log.go:172] (0xc0026a80a0) (1) Data frame handling I0318 14:23:59.622595 6 log.go:172] (0xc0026a80a0) (1) Data frame sent I0318 14:23:59.622699 6 log.go:172] (0xc002de3550) (0xc0026a80a0) Stream removed, broadcasting: 1 I0318 14:23:59.622734 6 log.go:172] (0xc002de3550) Go away received I0318 14:23:59.622851 6 log.go:172] (0xc002de3550) (0xc0026a80a0) Stream removed, broadcasting: 1 I0318 14:23:59.622891 6 log.go:172] (0xc002de3550) (0xc002f159a0) Stream removed, broadcasting: 3 I0318 14:23:59.622933 6 log.go:172] (0xc002de3550) (0xc002f15a40) Stream removed, broadcasting: 5 Mar 18 14:23:59.622: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:23:59.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9878" for this suite. 
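The exec commands above amount to a one-shot UDP echo check against each netserver pod: send the literal string "hostName" to port 8081 and expect the serving pod's hostname back. A self-contained Go sketch of the same probe (the pod IP and port mirror the logged command; the one-second timeout and buffer size are assumptions):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// probeUDP mimics `echo hostName | nc -w 1 -u <ip> 8081`: write the probe
// payload over UDP and read the single-line reply.
func probeUDP(addr string) (string, error) {
	conn, err := net.DialTimeout("udp", addr, time.Second)
	if err != nil {
		return "", err
	}
	defer conn.Close()

	if _, err := conn.Write([]byte("hostName")); err != nil {
		return "", err
	}
	_ = conn.SetReadDeadline(time.Now().Add(time.Second))
	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err
	}
	return string(buf[:n]), nil
}

func main() {
	reply, err := probeUDP("10.244.2.149:8081") // pod IP taken from the run above
	if err != nil {
		fmt.Fprintln(os.Stderr, "probe failed:", err)
		os.Exit(1)
	}
	fmt.Println("endpoint answered:", reply)
}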
Mar 18 14:24:21.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:24:21.717: INFO: namespace pod-network-test-9878 deletion completed in 22.090340912s • [SLOW TEST:44.491 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:24:21.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:24:21.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6884" for this suite. 
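The kubelet test above schedules a container whose command always fails and then asserts only that the pod can still be deleted while crash-looping. A sketch of such a pod spec (the pod name, container name, and image tag are illustrative assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// alwaysFailingPod builds a pod whose container exits non-zero immediately;
// with RestartPolicyAlways the kubelet keeps restarting it, and deletion must
// still succeed promptly.
func alwaysFailingPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "always-fails",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // exits 1 every time
			}},
			RestartPolicy: corev1.RestartPolicyAlways,
		},
	}
}

func main() { fmt.Println(alwaysFailingPod().Name) }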
Mar 18 14:24:27.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:24:27.935: INFO: namespace kubelet-test-6884 deletion completed in 6.11273905s • [SLOW TEST:6.217 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:24:27.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Mar 18 14:24:27.975: INFO: Waiting up to 5m0s for pod "client-containers-af3656e1-fc20-4ad7-84c7-c19ed97a227b" in namespace "containers-1097" to be "success or failure" Mar 18 14:24:27.991: INFO: Pod "client-containers-af3656e1-fc20-4ad7-84c7-c19ed97a227b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.475849ms Mar 18 14:24:29.996: INFO: Pod "client-containers-af3656e1-fc20-4ad7-84c7-c19ed97a227b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020913962s Mar 18 14:24:32.000: INFO: Pod "client-containers-af3656e1-fc20-4ad7-84c7-c19ed97a227b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024773831s STEP: Saw pod success Mar 18 14:24:32.000: INFO: Pod "client-containers-af3656e1-fc20-4ad7-84c7-c19ed97a227b" satisfied condition "success or failure" Mar 18 14:24:32.003: INFO: Trying to get logs from node iruya-worker pod client-containers-af3656e1-fc20-4ad7-84c7-c19ed97a227b container test-container: STEP: delete the pod Mar 18 14:24:32.088: INFO: Waiting for pod client-containers-af3656e1-fc20-4ad7-84c7-c19ed97a227b to disappear Mar 18 14:24:32.092: INFO: Pod client-containers-af3656e1-fc20-4ad7-84c7-c19ed97a227b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:24:32.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1097" for this suite. 
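What "use the image defaults" means concretely: when both Command and Args are left empty in the container spec, the runtime falls back to the image's own ENTRYPOINT and CMD. A sketch of such a spec (pod name and image tag are illustrative assumptions, not taken from the run):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func defaultsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "image-defaults-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				// Command: nil -> the image's ENTRYPOINT is used
				// Args:    nil -> the image's CMD is used
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() { fmt.Printf("%+v\n", defaultsPod().Spec.Containers[0]) }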
Mar 18 14:24:38.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:24:38.186: INFO: namespace containers-1097 deletion completed in 6.090693696s • [SLOW TEST:10.250 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:24:38.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Mar 18 14:24:38.298: INFO: Waiting up to 5m0s for pod "var-expansion-3775fcbe-005b-412b-833a-41682e1c3941" in namespace "var-expansion-2605" to be "success or failure" Mar 18 14:24:38.302: INFO: Pod "var-expansion-3775fcbe-005b-412b-833a-41682e1c3941": Phase="Pending", Reason="", readiness=false. Elapsed: 3.359868ms Mar 18 14:24:40.306: INFO: Pod "var-expansion-3775fcbe-005b-412b-833a-41682e1c3941": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007746386s Mar 18 14:24:42.310: INFO: Pod "var-expansion-3775fcbe-005b-412b-833a-41682e1c3941": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011738709s STEP: Saw pod success Mar 18 14:24:42.310: INFO: Pod "var-expansion-3775fcbe-005b-412b-833a-41682e1c3941" satisfied condition "success or failure" Mar 18 14:24:42.313: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-3775fcbe-005b-412b-833a-41682e1c3941 container dapi-container: STEP: delete the pod Mar 18 14:24:42.335: INFO: Waiting for pod var-expansion-3775fcbe-005b-412b-833a-41682e1c3941 to disappear Mar 18 14:24:42.338: INFO: Pod var-expansion-3775fcbe-005b-412b-833a-41682e1c3941 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:24:42.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2605" for this suite. 
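Env composition, as tested above, relies on Kubernetes expanding $(VAR) references in env values: a later variable may reference one declared earlier in the same container. A sketch (variable names and values are illustrative assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// composedEnv declares one plain variable and one that references it. The
// kubelet substitutes $(FOO) at container start, so COMPOSED resolves to
// "prefix-foo-value-suffix"; a reference to an undefined variable would be
// left as the literal text, and only earlier-declared variables are visible.
func composedEnv() []corev1.EnvVar {
	return []corev1.EnvVar{
		{Name: "FOO", Value: "foo-value"},
		{Name: "COMPOSED", Value: "prefix-$(FOO)-suffix"},
	}
}

func main() {
	for _, e := range composedEnv() {
		fmt.Printf("%s=%s\n", e.Name, e.Value)
	}
}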
Mar 18 14:24:48.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:24:48.430: INFO: namespace var-expansion-2605 deletion completed in 6.088775653s • [SLOW TEST:10.244 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:24:48.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 18 14:24:48.508: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48a4e4b1-5b67-4989-98bd-1091f9ba6648" in namespace "projected-1674" to be "success or failure" Mar 18 14:24:48.523: INFO: Pod "downwardapi-volume-48a4e4b1-5b67-4989-98bd-1091f9ba6648": Phase="Pending", Reason="", readiness=false. Elapsed: 14.97239ms Mar 18 14:24:50.527: INFO: Pod "downwardapi-volume-48a4e4b1-5b67-4989-98bd-1091f9ba6648": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018920575s Mar 18 14:24:52.531: INFO: Pod "downwardapi-volume-48a4e4b1-5b67-4989-98bd-1091f9ba6648": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02337915s STEP: Saw pod success Mar 18 14:24:52.531: INFO: Pod "downwardapi-volume-48a4e4b1-5b67-4989-98bd-1091f9ba6648" satisfied condition "success or failure" Mar 18 14:24:52.534: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-48a4e4b1-5b67-4989-98bd-1091f9ba6648 container client-container: STEP: delete the pod Mar 18 14:24:52.575: INFO: Waiting for pod downwardapi-volume-48a4e4b1-5b67-4989-98bd-1091f9ba6648 to disappear Mar 18 14:24:52.585: INFO: Pod downwardapi-volume-48a4e4b1-5b67-4989-98bd-1091f9ba6648 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:24:52.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1674" for this suite. 
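The projected downwardAPI test above mounts the container's own CPU request as a file. The spec boils down to a downward-API projection like the following sketch (the volume name, file path, and 1m divisor are illustrative assumptions; "client-container" matches the container name in the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// downwardCPUVolume projects requests.cpu of a named container into a file;
// the kubelet writes the value at pod start, so the container can read its
// own CPU request from /etc/podinfo/cpu_request (wherever the volume mounts).
func downwardCPUVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", // must name a container with a CPU request set
								Resource:      "requests.cpu",
								Divisor:       resource.MustParse("1m"), // report the value in millicores
							},
						}},
					},
				}},
			},
		},
	}
}

func main() { fmt.Printf("%+v\n", downwardCPUVolume()) }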
Mar 18 14:24:58.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:24:58.681: INFO: namespace projected-1674 deletion completed in 6.093114523s • [SLOW TEST:10.251 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:24:58.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6644 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-6644 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6644 Mar 18 14:24:58.760: INFO: Found 0 stateful pods, waiting for 1 Mar 18 14:25:08.765: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 18 14:25:08.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6644 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 14:25:09.005: INFO: stderr: "I0318 14:25:08.890025 2879 log.go:172] (0xc00013adc0) (0xc000380820) Create stream\nI0318 14:25:08.890086 2879 log.go:172] (0xc00013adc0) (0xc000380820) Stream added, broadcasting: 1\nI0318 14:25:08.893987 2879 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0318 14:25:08.894033 2879 log.go:172] (0xc00013adc0) (0xc000380000) Create stream\nI0318 14:25:08.894058 2879 log.go:172] (0xc00013adc0) (0xc000380000) Stream added, broadcasting: 3\nI0318 14:25:08.894955 2879 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0318 14:25:08.895001 2879 log.go:172] (0xc00013adc0) (0xc0006bc140) Create stream\nI0318 14:25:08.895016 2879 log.go:172] (0xc00013adc0) (0xc0006bc140) Stream added, broadcasting: 5\nI0318 14:25:08.895992 2879 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0318 14:25:08.972770 2879 log.go:172] (0xc00013adc0) Data frame received for 5\nI0318 14:25:08.972793 2879 log.go:172] (0xc0006bc140) (5) Data frame handling\nI0318 14:25:08.972806 2879 log.go:172] (0xc0006bc140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0318 
14:25:08.999379 2879 log.go:172] (0xc00013adc0) Data frame received for 5\nI0318 14:25:08.999434 2879 log.go:172] (0xc0006bc140) (5) Data frame handling\nI0318 14:25:08.999461 2879 log.go:172] (0xc00013adc0) Data frame received for 3\nI0318 14:25:08.999472 2879 log.go:172] (0xc000380000) (3) Data frame handling\nI0318 14:25:08.999483 2879 log.go:172] (0xc000380000) (3) Data frame sent\nI0318 14:25:08.999636 2879 log.go:172] (0xc00013adc0) Data frame received for 3\nI0318 14:25:08.999667 2879 log.go:172] (0xc000380000) (3) Data frame handling\nI0318 14:25:09.001539 2879 log.go:172] (0xc00013adc0) Data frame received for 1\nI0318 14:25:09.001578 2879 log.go:172] (0xc000380820) (1) Data frame handling\nI0318 14:25:09.001667 2879 log.go:172] (0xc000380820) (1) Data frame sent\nI0318 14:25:09.001712 2879 log.go:172] (0xc00013adc0) (0xc000380820) Stream removed, broadcasting: 1\nI0318 14:25:09.001747 2879 log.go:172] (0xc00013adc0) Go away received\nI0318 14:25:09.002071 2879 log.go:172] (0xc00013adc0) (0xc000380820) Stream removed, broadcasting: 1\nI0318 14:25:09.002103 2879 log.go:172] (0xc00013adc0) (0xc000380000) Stream removed, broadcasting: 3\nI0318 14:25:09.002118 2879 log.go:172] (0xc00013adc0) (0xc0006bc140) Stream removed, broadcasting: 5\n" Mar 18 14:25:09.006: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 14:25:09.006: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 14:25:09.009: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 18 14:25:19.014: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 18 14:25:19.014: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 14:25:19.027: INFO: POD NODE PHASE GRACE CONDITIONS Mar 18 14:25:19.027: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:24:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:25:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:25:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-18 14:24:58 +0000 UTC }] Mar 18 14:25:19.027: INFO: Mar 18 14:25:19.027: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 18 14:25:20.032: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996995115s Mar 18 14:25:21.166: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991949707s Mar 18 14:25:22.170: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.858538424s Mar 18 14:25:23.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.854162466s Mar 18 14:25:24.180: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.849620678s Mar 18 14:25:25.190: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.844628094s Mar 18 14:25:26.195: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.834567558s Mar 18 14:25:27.199: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.829653287s Mar 18 14:25:28.204: INFO: Verifying statefulset ss doesn't scale past 3 for another 825.636027ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6644 Mar 18 14:25:29.208: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6644 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 14:25:29.419: INFO: stderr: "I0318 14:25:29.330083 2902 log.go:172] (0xc0001168f0) (0xc000730c80) Create stream\nI0318 14:25:29.330130 2902 log.go:172] (0xc0001168f0) (0xc000730c80) Stream added, broadcasting: 1\nI0318 14:25:29.332570 2902 log.go:172] (0xc0001168f0) Reply frame received for 1\nI0318 14:25:29.332613 2902 log.go:172] (0xc0001168f0) (0xc000862000) Create stream\nI0318 14:25:29.332626 2902 log.go:172] (0xc0001168f0) (0xc000862000) Stream added, broadcasting: 3\nI0318 14:25:29.334020 2902 log.go:172] (0xc0001168f0) Reply frame received for 3\nI0318 14:25:29.334066 2902 log.go:172] (0xc0001168f0) (0xc000730d20) Create stream\nI0318 14:25:29.334080 2902 log.go:172] (0xc0001168f0) (0xc000730d20) Stream added, broadcasting: 5\nI0318 14:25:29.335189 2902 log.go:172] (0xc0001168f0) Reply frame received for 5\nI0318 14:25:29.412410 2902 log.go:172] (0xc0001168f0) Data frame received for 5\nI0318 14:25:29.412468 2902 log.go:172] (0xc0001168f0) Data frame received for 3\nI0318 14:25:29.412520 2902 log.go:172] (0xc000862000) (3) Data frame handling\nI0318 14:25:29.412545 2902 log.go:172] (0xc000862000) (3) Data frame sent\nI0318 14:25:29.412592 2902 log.go:172] (0xc000730d20) (5) Data frame handling\nI0318 14:25:29.412617 2902 log.go:172] (0xc000730d20) (5) Data frame sent\nI0318 14:25:29.412635 2902 log.go:172] (0xc0001168f0) Data frame received for 5\nI0318 14:25:29.412649 2902 log.go:172] (0xc000730d20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0318 14:25:29.412685 2902 log.go:172] (0xc0001168f0) Data frame received for 3\nI0318 14:25:29.412736 2902 log.go:172] (0xc000862000) (3) Data frame handling\nI0318 14:25:29.414770 2902 log.go:172] (0xc0001168f0) Data frame received for 1\nI0318 14:25:29.414798 2902 log.go:172] (0xc000730c80) (1) Data frame handling\nI0318 14:25:29.414814 2902 log.go:172] (0xc000730c80) (1) Data frame sent\nI0318 14:25:29.414849 2902 log.go:172] (0xc0001168f0) (0xc000730c80) Stream removed, broadcasting: 1\nI0318 14:25:29.414875 2902 log.go:172] (0xc0001168f0) Go away received\nI0318 14:25:29.415291 2902 log.go:172] (0xc0001168f0) (0xc000730c80) Stream removed, broadcasting: 1\nI0318 14:25:29.415329 2902 log.go:172] (0xc0001168f0) (0xc000862000) Stream removed, broadcasting: 3\nI0318 14:25:29.415344 2902 log.go:172] (0xc0001168f0) (0xc000730d20) Stream removed, broadcasting: 5\n" Mar 18 14:25:29.419: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 14:25:29.419: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 14:25:29.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6644 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 14:25:29.620: INFO: stderr: "I0318 14:25:29.556523 2924 log.go:172] (0xc0009e6420) (0xc00052c8c0) Create stream\nI0318 14:25:29.556579 2924 log.go:172] (0xc0009e6420) (0xc00052c8c0) Stream added, broadcasting: 1\nI0318 14:25:29.560432 2924 log.go:172] (0xc0009e6420) Reply frame received for 1\nI0318 14:25:29.560464 2924 log.go:172] (0xc0009e6420) (0xc000350320) Create stream\nI0318 14:25:29.560473 2924 log.go:172] (0xc0009e6420) (0xc000350320) Stream added, broadcasting: 3\nI0318 14:25:29.561503 2924 log.go:172] (0xc0009e6420) Reply frame received for 
3\nI0318 14:25:29.561539 2924 log.go:172] (0xc0009e6420) (0xc00052c000) Create stream\nI0318 14:25:29.561547 2924 log.go:172] (0xc0009e6420) (0xc00052c000) Stream added, broadcasting: 5\nI0318 14:25:29.562391 2924 log.go:172] (0xc0009e6420) Reply frame received for 5\nI0318 14:25:29.615679 2924 log.go:172] (0xc0009e6420) Data frame received for 3\nI0318 14:25:29.615709 2924 log.go:172] (0xc000350320) (3) Data frame handling\nI0318 14:25:29.615718 2924 log.go:172] (0xc000350320) (3) Data frame sent\nI0318 14:25:29.615723 2924 log.go:172] (0xc0009e6420) Data frame received for 3\nI0318 14:25:29.615728 2924 log.go:172] (0xc000350320) (3) Data frame handling\nI0318 14:25:29.615752 2924 log.go:172] (0xc0009e6420) Data frame received for 5\nI0318 14:25:29.615759 2924 log.go:172] (0xc00052c000) (5) Data frame handling\nI0318 14:25:29.615764 2924 log.go:172] (0xc00052c000) (5) Data frame sent\nI0318 14:25:29.615771 2924 log.go:172] (0xc0009e6420) Data frame received for 5\nI0318 14:25:29.615781 2924 log.go:172] (0xc00052c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0318 14:25:29.616797 2924 log.go:172] (0xc0009e6420) Data frame received for 1\nI0318 14:25:29.616812 2924 log.go:172] (0xc00052c8c0) (1) Data frame handling\nI0318 14:25:29.616822 2924 log.go:172] (0xc00052c8c0) (1) Data frame sent\nI0318 14:25:29.616832 2924 log.go:172] (0xc0009e6420) (0xc00052c8c0) Stream removed, broadcasting: 1\nI0318 14:25:29.616840 2924 log.go:172] (0xc0009e6420) Go away received\nI0318 14:25:29.617286 2924 log.go:172] (0xc0009e6420) (0xc00052c8c0) Stream removed, broadcasting: 1\nI0318 14:25:29.617309 2924 log.go:172] (0xc0009e6420) (0xc000350320) Stream removed, broadcasting: 3\nI0318 14:25:29.617318 2924 log.go:172] (0xc0009e6420) (0xc00052c000) Stream removed, broadcasting: 5\n" Mar 18 14:25:29.620: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 14:25:29.620: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 14:25:29.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6644 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 14:25:29.831: INFO: stderr: "I0318 14:25:29.761894 2946 log.go:172] (0xc0008a8420) (0xc00037e820) Create stream\nI0318 14:25:29.761951 2946 log.go:172] (0xc0008a8420) (0xc00037e820) Stream added, broadcasting: 1\nI0318 14:25:29.763724 2946 log.go:172] (0xc0008a8420) Reply frame received for 1\nI0318 14:25:29.763790 2946 log.go:172] (0xc0008a8420) (0xc00092a000) Create stream\nI0318 14:25:29.763829 2946 log.go:172] (0xc0008a8420) (0xc00092a000) Stream added, broadcasting: 3\nI0318 14:25:29.764461 2946 log.go:172] (0xc0008a8420) Reply frame received for 3\nI0318 14:25:29.764496 2946 log.go:172] (0xc0008a8420) (0xc00037e8c0) Create stream\nI0318 14:25:29.764508 2946 log.go:172] (0xc0008a8420) (0xc00037e8c0) Stream added, broadcasting: 5\nI0318 14:25:29.765256 2946 log.go:172] (0xc0008a8420) Reply frame received for 5\nI0318 14:25:29.826698 2946 log.go:172] (0xc0008a8420) Data frame received for 5\nI0318 14:25:29.826734 2946 log.go:172] (0xc00037e8c0) (5) Data frame handling\nI0318 14:25:29.826758 2946 log.go:172] (0xc0008a8420) Data frame received for 3\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0318 
14:25:29.826784 2946 log.go:172] (0xc00092a000) (3) Data frame handling\nI0318 14:25:29.826798 2946 log.go:172] (0xc00092a000) (3) Data frame sent\nI0318 14:25:29.826819 2946 log.go:172] (0xc0008a8420) Data frame received for 3\nI0318 14:25:29.826828 2946 log.go:172] (0xc00092a000) (3) Data frame handling\nI0318 14:25:29.826860 2946 log.go:172] (0xc00037e8c0) (5) Data frame sent\nI0318 14:25:29.826871 2946 log.go:172] (0xc0008a8420) Data frame received for 5\nI0318 14:25:29.826881 2946 log.go:172] (0xc00037e8c0) (5) Data frame handling\nI0318 14:25:29.828021 2946 log.go:172] (0xc0008a8420) Data frame received for 1\nI0318 14:25:29.828056 2946 log.go:172] (0xc00037e820) (1) Data frame handling\nI0318 14:25:29.828079 2946 log.go:172] (0xc00037e820) (1) Data frame sent\nI0318 14:25:29.828097 2946 log.go:172] (0xc0008a8420) (0xc00037e820) Stream removed, broadcasting: 1\nI0318 14:25:29.828121 2946 log.go:172] (0xc0008a8420) Go away received\nI0318 14:25:29.828393 2946 log.go:172] (0xc0008a8420) (0xc00037e820) Stream removed, broadcasting: 1\nI0318 14:25:29.828408 2946 log.go:172] (0xc0008a8420) (0xc00092a000) Stream removed, broadcasting: 3\nI0318 14:25:29.828415 2946 log.go:172] (0xc0008a8420) (0xc00037e8c0) Stream removed, broadcasting: 5\n" Mar 18 14:25:29.831: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 14:25:29.831: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 14:25:29.835: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 18 14:25:39.841: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 14:25:39.841: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 14:25:39.841: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 18 14:25:39.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6644 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 14:25:40.058: INFO: stderr: "I0318 14:25:39.976155 2965 log.go:172] (0xc00012adc0) (0xc00070a6e0) Create stream\nI0318 14:25:39.976219 2965 log.go:172] (0xc00012adc0) (0xc00070a6e0) Stream added, broadcasting: 1\nI0318 14:25:39.978754 2965 log.go:172] (0xc00012adc0) Reply frame received for 1\nI0318 14:25:39.978792 2965 log.go:172] (0xc00012adc0) (0xc000a10000) Create stream\nI0318 14:25:39.978812 2965 log.go:172] (0xc00012adc0) (0xc000a10000) Stream added, broadcasting: 3\nI0318 14:25:39.980059 2965 log.go:172] (0xc00012adc0) Reply frame received for 3\nI0318 14:25:39.980230 2965 log.go:172] (0xc00012adc0) (0xc000a2e000) Create stream\nI0318 14:25:39.980265 2965 log.go:172] (0xc00012adc0) (0xc000a2e000) Stream added, broadcasting: 5\nI0318 14:25:39.981353 2965 log.go:172] (0xc00012adc0) Reply frame received for 5\nI0318 14:25:40.052833 2965 log.go:172] (0xc00012adc0) Data frame received for 3\nI0318 14:25:40.052879 2965 log.go:172] (0xc000a10000) (3) Data frame handling\nI0318 14:25:40.052892 2965 log.go:172] (0xc000a10000) (3) Data frame sent\nI0318 14:25:40.052899 2965 log.go:172] (0xc00012adc0) Data frame received for 3\nI0318 14:25:40.052905 2965 log.go:172] (0xc000a10000) (3) Data frame handling\nI0318 14:25:40.052934 2965 log.go:172] (0xc00012adc0) Data frame received for 5\nI0318 14:25:40.052942 2965 
log.go:172] (0xc000a2e000) (5) Data frame handling\nI0318 14:25:40.052949 2965 log.go:172] (0xc000a2e000) (5) Data frame sent\nI0318 14:25:40.052955 2965 log.go:172] (0xc00012adc0) Data frame received for 5\nI0318 14:25:40.052962 2965 log.go:172] (0xc000a2e000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0318 14:25:40.054275 2965 log.go:172] (0xc00012adc0) Data frame received for 1\nI0318 14:25:40.054294 2965 log.go:172] (0xc00070a6e0) (1) Data frame handling\nI0318 14:25:40.054313 2965 log.go:172] (0xc00070a6e0) (1) Data frame sent\nI0318 14:25:40.054328 2965 log.go:172] (0xc00012adc0) (0xc00070a6e0) Stream removed, broadcasting: 1\nI0318 14:25:40.054344 2965 log.go:172] (0xc00012adc0) Go away received\nI0318 14:25:40.054921 2965 log.go:172] (0xc00012adc0) (0xc00070a6e0) Stream removed, broadcasting: 1\nI0318 14:25:40.054962 2965 log.go:172] (0xc00012adc0) (0xc000a10000) Stream removed, broadcasting: 3\nI0318 14:25:40.054988 2965 log.go:172] (0xc00012adc0) (0xc000a2e000) Stream removed, broadcasting: 5\n" Mar 18 14:25:40.058: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 14:25:40.058: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 14:25:40.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6644 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 14:25:40.283: INFO: stderr: "I0318 14:25:40.186683 2986 log.go:172] (0xc00075a9a0) (0xc00065e820) Create stream\nI0318 14:25:40.186752 2986 log.go:172] (0xc00075a9a0) (0xc00065e820) Stream added, broadcasting: 1\nI0318 14:25:40.189100 2986 log.go:172] (0xc00075a9a0) Reply frame received for 1\nI0318 14:25:40.189234 2986 log.go:172] (0xc00075a9a0) (0xc0004ec000) Create stream\nI0318 14:25:40.189247 2986 log.go:172] (0xc00075a9a0) (0xc0004ec000) Stream added, broadcasting: 3\nI0318 14:25:40.190186 2986 log.go:172] (0xc00075a9a0) Reply frame received for 3\nI0318 14:25:40.190228 2986 log.go:172] (0xc00075a9a0) (0xc00065e8c0) Create stream\nI0318 14:25:40.190241 2986 log.go:172] (0xc00075a9a0) (0xc00065e8c0) Stream added, broadcasting: 5\nI0318 14:25:40.191286 2986 log.go:172] (0xc00075a9a0) Reply frame received for 5\nI0318 14:25:40.252445 2986 log.go:172] (0xc00075a9a0) Data frame received for 5\nI0318 14:25:40.252477 2986 log.go:172] (0xc00065e8c0) (5) Data frame handling\nI0318 14:25:40.252496 2986 log.go:172] (0xc00065e8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0318 14:25:40.277717 2986 log.go:172] (0xc00075a9a0) Data frame received for 3\nI0318 14:25:40.277758 2986 log.go:172] (0xc0004ec000) (3) Data frame handling\nI0318 14:25:40.277776 2986 log.go:172] (0xc0004ec000) (3) Data frame sent\nI0318 14:25:40.277790 2986 log.go:172] (0xc00075a9a0) Data frame received for 3\nI0318 14:25:40.277802 2986 log.go:172] (0xc0004ec000) (3) Data frame handling\nI0318 14:25:40.278155 2986 log.go:172] (0xc00075a9a0) Data frame received for 5\nI0318 14:25:40.278176 2986 log.go:172] (0xc00065e8c0) (5) Data frame handling\nI0318 14:25:40.280067 2986 log.go:172] (0xc00075a9a0) Data frame received for 1\nI0318 14:25:40.280101 2986 log.go:172] (0xc00065e820) (1) Data frame handling\nI0318 14:25:40.280121 2986 log.go:172] (0xc00065e820) (1) Data frame sent\nI0318 14:25:40.280154 2986 log.go:172] (0xc00075a9a0) (0xc00065e820) Stream removed, broadcasting: 1\nI0318 14:25:40.280179 2986 log.go:172] 
(0xc00075a9a0) Go away received\nI0318 14:25:40.280558 2986 log.go:172] (0xc00075a9a0) (0xc00065e820) Stream removed, broadcasting: 1\nI0318 14:25:40.280577 2986 log.go:172] (0xc00075a9a0) (0xc0004ec000) Stream removed, broadcasting: 3\nI0318 14:25:40.280585 2986 log.go:172] (0xc00075a9a0) (0xc00065e8c0) Stream removed, broadcasting: 5\n" Mar 18 14:25:40.283: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 14:25:40.283: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 14:25:40.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6644 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 14:25:40.524: INFO: stderr: "I0318 14:25:40.411163 3008 log.go:172] (0xc0002fc420) (0xc0002e06e0) Create stream\nI0318 14:25:40.411219 3008 log.go:172] (0xc0002fc420) (0xc0002e06e0) Stream added, broadcasting: 1\nI0318 14:25:40.413574 3008 log.go:172] (0xc0002fc420) Reply frame received for 1\nI0318 14:25:40.413614 3008 log.go:172] (0xc0002fc420) (0xc0006783c0) Create stream\nI0318 14:25:40.413641 3008 log.go:172] (0xc0002fc420) (0xc0006783c0) Stream added, broadcasting: 3\nI0318 14:25:40.414456 3008 log.go:172] (0xc0002fc420) Reply frame received for 3\nI0318 14:25:40.414487 3008 log.go:172] (0xc0002fc420) (0xc0002e0780) Create stream\nI0318 14:25:40.414498 3008 log.go:172] (0xc0002fc420) (0xc0002e0780) Stream added, broadcasting: 5\nI0318 14:25:40.415356 3008 log.go:172] (0xc0002fc420) Reply frame received for 5\nI0318 14:25:40.485583 3008 log.go:172] (0xc0002fc420) Data frame received for 5\nI0318 14:25:40.485612 3008 log.go:172] (0xc0002e0780) (5) Data frame handling\nI0318 14:25:40.485644 3008 log.go:172] (0xc0002e0780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0318 14:25:40.517842 3008 log.go:172] (0xc0002fc420) Data frame received for 3\nI0318 14:25:40.517871 3008 log.go:172] (0xc0006783c0) (3) Data frame handling\nI0318 14:25:40.517897 3008 log.go:172] (0xc0006783c0) (3) Data frame sent\nI0318 14:25:40.517917 3008 log.go:172] (0xc0002fc420) Data frame received for 3\nI0318 14:25:40.517938 3008 log.go:172] (0xc0006783c0) (3) Data frame handling\nI0318 14:25:40.518117 3008 log.go:172] (0xc0002fc420) Data frame received for 5\nI0318 14:25:40.518130 3008 log.go:172] (0xc0002e0780) (5) Data frame handling\nI0318 14:25:40.519923 3008 log.go:172] (0xc0002fc420) Data frame received for 1\nI0318 14:25:40.519939 3008 log.go:172] (0xc0002e06e0) (1) Data frame handling\nI0318 14:25:40.519954 3008 log.go:172] (0xc0002e06e0) (1) Data frame sent\nI0318 14:25:40.519965 3008 log.go:172] (0xc0002fc420) (0xc0002e06e0) Stream removed, broadcasting: 1\nI0318 14:25:40.519976 3008 log.go:172] (0xc0002fc420) Go away received\nI0318 14:25:40.520420 3008 log.go:172] (0xc0002fc420) (0xc0002e06e0) Stream removed, broadcasting: 1\nI0318 14:25:40.520442 3008 log.go:172] (0xc0002fc420) (0xc0006783c0) Stream removed, broadcasting: 3\nI0318 14:25:40.520453 3008 log.go:172] (0xc0002fc420) (0xc0002e0780) Stream removed, broadcasting: 5\n" Mar 18 14:25:40.524: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 14:25:40.524: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 14:25:40.524: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 14:25:40.527: INFO: Waiting for 
stateful set status.readyReplicas to become 0, currently 2 Mar 18 14:25:50.535: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 18 14:25:50.535: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 18 14:25:50.535: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 18 14:25:50.561: INFO:
POD   NODE           PHASE    GRACE  CONDITIONS
ss-0  iruya-worker2  Running         Initialized=True (since 14:24:58); Ready=False (ContainersNotReady: [nginx], since 14:25:40); ContainersReady=False (ContainersNotReady: [nginx], since 14:25:40); PodScheduled=True (since 14:24:58)
ss-1  iruya-worker   Running         Initialized=True (since 14:25:19); Ready=False (ContainersNotReady: [nginx], since 14:25:40); ContainersReady=False (ContainersNotReady: [nginx], since 14:25:40); PodScheduled=True (since 14:25:19)
ss-2  iruya-worker2  Running         Initialized=True (since 14:25:19); Ready=False (ContainersNotReady: [nginx], since 14:25:40); ContainersReady=False (ContainersNotReady: [nginx], since 14:25:40); PodScheduled=True (since 14:25:19)
Mar 18 14:25:50.561: INFO: StatefulSet ss has not reached scale 0, at 3
The same three-pod table was re-polled once per second from 14:25:51.651 through 14:25:59.690 with unchanged conditions; from 14:25:51 each pod showed a 30s deletion grace period, and from 14:25:52 onward each phase read Pending. Every poll ended with: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6644
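------------------------------
The scale-down just announced is the burst-scaling path under test: the StatefulSet drops straight to 0 replicas while all three pods are deliberately unready (their index.html was moved aside, so the readiness check fails), which a StatefulSet only tolerates with podManagementPolicy: Parallel. A sketch of the equivalent manual steps, assuming kubectl is pointed at the same cluster; only the namespace and object names below come from this run:

# Scale the StatefulSet to zero; with Parallel pod management the
# controller deletes ss-0, ss-1 and ss-2 together instead of in order.
kubectl -n statefulset-6644 scale statefulset ss --replicas=0

# Watch the pods terminate and the status converge to 0 replicas.
kubectl -n statefulset-6644 get pods -w
kubectl -n statefulset-6644 get statefulset ss -o jsonpath='{.status.replicas}'
------------------------------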
Mar 18 14:26:00.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6644 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 14:26:00.824: INFO: rc: 1 Mar 18 14:26:00.824: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6644 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00277e450 exit status 1 true [0xc0033708b0 0xc0033708c8 0xc0033708e0] [0xc0033708b0 0xc0033708c8 0xc0033708e0] [0xc0033708c0 0xc0033708d8] [0xba70e0 0xba70e0] 0xc002617380 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Mar 18 14:26:10.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6644 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 14:26:10.918: INFO: rc: 1 Mar 18 14:26:10.918: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6644 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e1c0f0 exit status 1 true [0xc00263e048 0xc00263e080 0xc00263e0b8] [0xc00263e048 0xc00263e080 0xc00263e0b8] [0xc00263e078 0xc00263e0a0] [0xba70e0 0xba70e0] 0xc0023e05a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
The identical RunHostCmd attempt was retried roughly every 10 seconds from 14:26:20.918 through 14:30:55.738 (28 further attempts), each returning rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found
Mar 18 14:31:05.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6644 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 14:31:05.831: INFO: rc: 1 Mar 18 14:31:05.832: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Mar 18 14:31:05.832: INFO: Scaling statefulset ss to 0 Mar 18 14:31:05.839: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 18 14:31:05.841: INFO: Deleting all statefulset in ns statefulset-6644 Mar 18 14:31:05.843: INFO: Scaling statefulset ss to 0 Mar 18 14:31:05.851: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 14:31:05.854: 
INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:31:05.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6644" for this suite. Mar 18 14:31:11.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:31:11.965: INFO: namespace statefulset-6644 deletion completed in 6.089476874s • [SLOW TEST:373.283 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:31:11.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-25ef9865-a815-41b0-9ab6-800e1c3fccb7 Mar 18 14:31:12.045: INFO: Pod name my-hostname-basic-25ef9865-a815-41b0-9ab6-800e1c3fccb7: Found 0 pods out of 1 Mar 18 14:31:17.049: INFO: Pod name my-hostname-basic-25ef9865-a815-41b0-9ab6-800e1c3fccb7: Found 1 pods out of 1 Mar 18 14:31:17.049: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-25ef9865-a815-41b0-9ab6-800e1c3fccb7" are running Mar 18 14:31:17.053: INFO: Pod "my-hostname-basic-25ef9865-a815-41b0-9ab6-800e1c3fccb7-2lpdn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 14:31:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 14:31:15 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 14:31:15 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-18 14:31:12 +0000 UTC Reason: Message:}]) Mar 18 14:31:17.053: INFO: Trying to dial the pod Mar 18 14:31:22.065: INFO: Controller my-hostname-basic-25ef9865-a815-41b0-9ab6-800e1c3fccb7: Got expected result from replica 1 [my-hostname-basic-25ef9865-a815-41b0-9ab6-800e1c3fccb7-2lpdn]: "my-hostname-basic-25ef9865-a815-41b0-9ab6-800e1c3fccb7-2lpdn", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:31:22.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "replication-controller-7130" for this suite. Mar 18 14:31:28.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:31:28.184: INFO: namespace replication-controller-7130 deletion completed in 6.114434458s • [SLOW TEST:16.218 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:31:28.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-228b2b14-4e4a-43f1-a2a4-4cff255a9e31 STEP: Creating a pod to test consume configMaps Mar 18 14:31:28.282: INFO: Waiting up to 5m0s for pod "pod-configmaps-e11496bc-5447-4f73-82f8-b0523fa5ed97" in namespace "configmap-5322" to be "success or failure" Mar 18 14:31:28.292: INFO: Pod "pod-configmaps-e11496bc-5447-4f73-82f8-b0523fa5ed97": Phase="Pending", Reason="", readiness=false. Elapsed: 9.518717ms Mar 18 14:31:30.295: INFO: Pod "pod-configmaps-e11496bc-5447-4f73-82f8-b0523fa5ed97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013267957s Mar 18 14:31:32.300: INFO: Pod "pod-configmaps-e11496bc-5447-4f73-82f8-b0523fa5ed97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017871573s STEP: Saw pod success Mar 18 14:31:32.300: INFO: Pod "pod-configmaps-e11496bc-5447-4f73-82f8-b0523fa5ed97" satisfied condition "success or failure" Mar 18 14:31:32.303: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e11496bc-5447-4f73-82f8-b0523fa5ed97 container configmap-volume-test: STEP: delete the pod Mar 18 14:31:32.523: INFO: Waiting for pod pod-configmaps-e11496bc-5447-4f73-82f8-b0523fa5ed97 to disappear Mar 18 14:31:32.525: INFO: Pod pod-configmaps-e11496bc-5447-4f73-82f8-b0523fa5ed97 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:31:32.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5322" for this suite. 
Mar 18 14:31:38.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:31:38.654: INFO: namespace configmap-5322 deletion completed in 6.125222036s • [SLOW TEST:10.469 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:31:38.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:31:38.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8860" for this suite. 
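------------------------------
The QOS Class check above verifies that the API server derives status.qosClass from the pod's resource requests and limits: requests equal to limits in every container yields Guaranteed, requests below limits (or requests alone) yields Burstable, and no resources at all yields BestEffort. A sketch of observing this by hand; the pod name is illustrative, and the --requests/--limits flags match the v1.15-era kubectl used in this run (newer kubectl drops them in favor of a manifest):

# Create a pod with requests == limits, then read back its computed QoS class.
kubectl run qos-demo --image=nginx --restart=Never \
  --requests=cpu=100m,memory=128Mi --limits=cpu=100m,memory=128Mi
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints: Guaranteed
------------------------------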
Mar 18 14:32:00.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:32:00.913: INFO: namespace pods-8860 deletion completed in 22.125075243s • [SLOW TEST:22.260 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:32:00.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0318 14:32:41.000954 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 18 14:32:41.001: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:32:41.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9255" for this suite. 
Mar 18 14:32:51.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:32:51.113: INFO: namespace gc-9255 deletion completed in 10.108659672s • [SLOW TEST:50.199 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:32:51.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 18 14:32:51.201: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 18 14:32:51.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:32:51.214: INFO: Number of nodes with available pods: 0 Mar 18 14:32:51.214: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:32:52.239: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:32:52.242: INFO: Number of nodes with available pods: 0 Mar 18 14:32:52.242: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:32:53.219: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:32:53.223: INFO: Number of nodes with available pods: 0 Mar 18 14:32:53.223: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:32:54.218: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:32:54.226: INFO: Number of nodes with available pods: 0 Mar 18 14:32:54.226: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:32:55.219: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:32:55.222: INFO: Number of nodes with available pods: 2 Mar 18 14:32:55.222: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 18 14:32:55.250: INFO: Wrong image for pod: daemon-set-gw2xs. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:32:55.250: INFO: Wrong image for pod: daemon-set-mzx9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:32:55.268: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:32:56.272: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:32:56.272: INFO: Wrong image for pod: daemon-set-mzx9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:32:56.276: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:32:57.273: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:32:57.273: INFO: Wrong image for pod: daemon-set-mzx9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:32:57.277: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:32:58.272: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:32:58.272: INFO: Wrong image for pod: daemon-set-mzx9h. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:32:58.272: INFO: Pod daemon-set-mzx9h is not available Mar 18 14:32:58.291: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:32:59.273: INFO: Pod daemon-set-4vgcb is not available Mar 18 14:32:59.273: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:32:59.277: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:00.271: INFO: Pod daemon-set-4vgcb is not available Mar 18 14:33:00.271: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:33:00.274: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:01.272: INFO: Pod daemon-set-4vgcb is not available Mar 18 14:33:01.272: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:33:01.274: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:02.272: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 18 14:33:02.275: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:03.273: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:33:03.273: INFO: Pod daemon-set-gw2xs is not available Mar 18 14:33:03.277: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:04.276: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:33:04.276: INFO: Pod daemon-set-gw2xs is not available Mar 18 14:33:04.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:05.274: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:33:05.274: INFO: Pod daemon-set-gw2xs is not available Mar 18 14:33:05.278: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:06.277: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:33:06.277: INFO: Pod daemon-set-gw2xs is not available Mar 18 14:33:06.280: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:07.272: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:33:07.272: INFO: Pod daemon-set-gw2xs is not available Mar 18 14:33:07.276: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:08.272: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:33:08.272: INFO: Pod daemon-set-gw2xs is not available Mar 18 14:33:08.283: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:09.273: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:33:09.273: INFO: Pod daemon-set-gw2xs is not available Mar 18 14:33:09.277: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:10.272: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
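(Note: the repeated "can't tolerate node iruya-control-plane" lines mean the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint and the test DaemonSet declares no matching toleration, so the framework excludes that node from its pod counts. A sketch of how to inspect the taint, plus what a toleration would look like — the toleration is illustrative, not something this test adds:)

    kubectl describe node iruya-control-plane | grep -A1 Taints
    # A DaemonSet meant to run on such a node would add this to its pod template:
    #   tolerations:
    #   - key: node-role.kubernetes.io/master
    #     operator: Exists
    #     effect: NoSchedule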
Mar 18 14:33:10.272: INFO: Pod daemon-set-gw2xs is not available Mar 18 14:33:10.276: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:11.273: INFO: Wrong image for pod: daemon-set-gw2xs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 18 14:33:11.273: INFO: Pod daemon-set-gw2xs is not available Mar 18 14:33:11.276: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:12.273: INFO: Pod daemon-set-rdzld is not available Mar 18 14:33:12.276: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Mar 18 14:33:12.279: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:12.282: INFO: Number of nodes with available pods: 1 Mar 18 14:33:12.282: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:33:13.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:13.289: INFO: Number of nodes with available pods: 1 Mar 18 14:33:13.289: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:33:14.298: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:14.318: INFO: Number of nodes with available pods: 1 Mar 18 14:33:14.319: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:33:15.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:15.290: INFO: Number of nodes with available pods: 1 Mar 18 14:33:15.290: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:33:16.287: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:33:16.290: INFO: Number of nodes with available pods: 2 Mar 18 14:33:16.291: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7766, will wait for the garbage collector to delete the pods Mar 18 14:33:16.389: INFO: Deleting DaemonSet.extensions daemon-set took: 7.169315ms Mar 18 14:33:16.690: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.295899ms Mar 18 14:33:22.293: INFO: Number of nodes with available pods: 0 Mar 18 14:33:22.293: INFO: Number of running nodes: 0, number of available pods: 0 Mar 18 14:33:22.296: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7766/daemonsets","resourceVersion":"534707"},"items":null} Mar 18 14:33:22.299: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7766/pods","resourceVersion":"534707"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:33:22.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7766" for this suite. Mar 18 14:33:28.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:33:28.438: INFO: namespace daemonsets-7766 deletion completed in 6.107455952s • [SLOW TEST:37.325 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:33:28.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-51948b3b-3274-474b-b208-c4a8a0fd8e29 STEP: Creating a pod to test consume configMaps Mar 18 14:33:28.510: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50cfc339-ab43-44f7-8c47-eab8fc8bc027" in namespace "projected-315" to be "success or failure" Mar 18 14:33:28.520: INFO: Pod "pod-projected-configmaps-50cfc339-ab43-44f7-8c47-eab8fc8bc027": Phase="Pending", Reason="", readiness=false. Elapsed: 9.792518ms Mar 18 14:33:30.524: INFO: Pod "pod-projected-configmaps-50cfc339-ab43-44f7-8c47-eab8fc8bc027": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014158098s Mar 18 14:33:32.529: INFO: Pod "pod-projected-configmaps-50cfc339-ab43-44f7-8c47-eab8fc8bc027": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01849596s STEP: Saw pod success Mar 18 14:33:32.529: INFO: Pod "pod-projected-configmaps-50cfc339-ab43-44f7-8c47-eab8fc8bc027" satisfied condition "success or failure" Mar 18 14:33:32.532: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-50cfc339-ab43-44f7-8c47-eab8fc8bc027 container projected-configmap-volume-test: STEP: delete the pod Mar 18 14:33:32.551: INFO: Waiting for pod pod-projected-configmaps-50cfc339-ab43-44f7-8c47-eab8fc8bc027 to disappear Mar 18 14:33:32.562: INFO: Pod pod-projected-configmaps-50cfc339-ab43-44f7-8c47-eab8fc8bc027 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:33:32.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-315" for this suite. Mar 18 14:33:38.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:33:38.656: INFO: namespace projected-315 deletion completed in 6.0908787s • [SLOW TEST:10.217 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:33:38.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-91b31af6-bfe5-43ac-9506-f56205980ba6 in namespace container-probe-2835 Mar 18 14:33:42.735: INFO: Started pod busybox-91b31af6-bfe5-43ac-9506-f56205980ba6 in namespace container-probe-2835 STEP: checking the pod's current state and verifying that restartCount is present Mar 18 14:33:42.739: INFO: Initial restart count of pod busybox-91b31af6-bfe5-43ac-9506-f56205980ba6 is 0 Mar 18 14:34:28.839: INFO: Restart count of pod container-probe-2835/busybox-91b31af6-bfe5-43ac-9506-f56205980ba6 is now 1 (46.100136009s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:34:28.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2835" for this suite. 
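(Note: the probe test just above (container-probe-2835) runs a busybox container whose exec liveness probe is `cat /tmp/health`; once the container removes the file, the probe fails and the kubelet restarts the container — the restartCount 0 -> 1 transition logged at 14:34:28. A minimal sketch of that pattern, with illustrative timings:)

    # liveness-exec.yaml -- sketch: the container creates /tmp/health, later
    # deletes it, so the probe starts failing and the kubelet restarts it.
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec
    spec:
      containers:
      - name: busybox
        image: busybox
        args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5

    kubectl apply -f liveness-exec.yaml
    kubectl get pod liveness-exec -w   # watch RESTARTS climb once the file is gone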
Mar 18 14:34:34.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:34:34.993: INFO: namespace container-probe-2835 deletion completed in 6.100566442s • [SLOW TEST:56.337 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:34:34.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 18 14:34:39.106: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 18 14:34:54.209: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:34:54.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6706" for this suite. 
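(Note: the Delete Grace Period test above submits a pod, deletes it gracefully through the API, and confirms via the kubectl proxy that the kubelet observed the termination notice before the pod object disappeared. Issuing a graceful delete by hand, with an illustrative pod name:)

    # Override the pod's terminationGracePeriodSeconds for this delete:
    kubectl delete pod mypod --grace-period=30
    # --grace-period=0 --force skips the graceful path entirely (use sparingly).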
Mar 18 14:35:00.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:35:00.329: INFO: namespace pods-6706 deletion completed in 6.113369748s • [SLOW TEST:25.336 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:35:00.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Mar 18 14:35:00.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 18 14:35:00.521: INFO: stderr: "" Mar 18 14:35:00.521: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:35:00.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4248" for this suite. 
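(Note: the \x1b[0;32m / \x1b[0;33m bytes in the stdout above are ANSI colour escapes emitted by kubectl; the test only checks that the master service line appears in cluster-info. The same check by hand:)

    kubectl cluster-info
    # Full cluster state dump for debugging:
    kubectl cluster-info dump | head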
Mar 18 14:35:06.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:35:06.620: INFO: namespace kubectl-4248 deletion completed in 6.094837933s • [SLOW TEST:6.289 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:35:06.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 18 14:35:09.721: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:35:09.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2332" for this suite. 
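(Note: terminationMessagePolicy FallbackToLogsOnError tells the kubelet to fall back to the tail of the container log when a container fails without writing its termination-message file; in the test above the container succeeds, so the message "OK" is read from the file itself. A minimal sketch:)

    # termination-demo.yaml -- sketch
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
        terminationMessagePath: /dev/termination-log      # the default path
        terminationMessagePolicy: FallbackToLogsOnError

    kubectl apply -f termination-demo.yaml
    # After it exits, the message shows up in the container status:
    kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'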
Mar 18 14:35:15.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:35:15.840: INFO: namespace container-runtime-2332 deletion completed in 6.097631424s • [SLOW TEST:9.220 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:35:15.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-37473d94-bf3c-4b42-acca-3367dfca6dba STEP: Creating a pod to test consume configMaps Mar 18 14:35:15.907: INFO: Waiting up to 5m0s for pod "pod-configmaps-a484c32c-b1af-4c94-9892-05004f816431" in namespace "configmap-5815" to be "success or failure" Mar 18 14:35:15.923: INFO: Pod "pod-configmaps-a484c32c-b1af-4c94-9892-05004f816431": Phase="Pending", Reason="", readiness=false. Elapsed: 16.039814ms Mar 18 14:35:17.927: INFO: Pod "pod-configmaps-a484c32c-b1af-4c94-9892-05004f816431": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020306375s Mar 18 14:35:19.931: INFO: Pod "pod-configmaps-a484c32c-b1af-4c94-9892-05004f816431": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024199668s STEP: Saw pod success Mar 18 14:35:19.931: INFO: Pod "pod-configmaps-a484c32c-b1af-4c94-9892-05004f816431" satisfied condition "success or failure" Mar 18 14:35:19.934: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-a484c32c-b1af-4c94-9892-05004f816431 container configmap-volume-test: STEP: delete the pod Mar 18 14:35:19.955: INFO: Waiting for pod pod-configmaps-a484c32c-b1af-4c94-9892-05004f816431 to disappear Mar 18 14:35:19.958: INFO: Pod pod-configmaps-a484c32c-b1af-4c94-9892-05004f816431 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:35:19.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5815" for this suite. 
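(Note: the ConfigMap volume test above mounts each key of a ConfigMap as a file and has the container read one back. An equivalent sketch, names illustrative:)

    kubectl create configmap demo-config --from-literal=data-1=value-1

    # configmap-demo.yaml -- sketch
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["cat", "/etc/config/data-1"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/config
      volumes:
      - name: cfg
        configMap:
          name: demo-config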
Mar 18 14:35:25.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:35:26.067: INFO: namespace configmap-5815 deletion completed in 6.105942065s • [SLOW TEST:10.226 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:35:26.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 18 14:35:26.303: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:26.318: INFO: Number of nodes with available pods: 0 Mar 18 14:35:26.318: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:27.329: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:27.332: INFO: Number of nodes with available pods: 0 Mar 18 14:35:27.332: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:28.335: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:28.339: INFO: Number of nodes with available pods: 0 Mar 18 14:35:28.339: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:29.323: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:29.327: INFO: Number of nodes with available pods: 0 Mar 18 14:35:29.327: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:30.322: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:30.324: INFO: Number of nodes with available pods: 2 Mar 18 14:35:30.324: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
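(Note: "Stop a daemon pod, check that the daemon pod is revived" — the test deletes one of the daemon pods, and the polling below is the wait for the controller to recreate it on the now-empty node. Doing the same by hand, with an illustrative label selector:)

    # Delete the daemon pod on one node and watch the controller replace it:
    kubectl delete pod -l app=daemon-set --field-selector spec.nodeName=iruya-worker
    kubectl get pods -l app=daemon-set -o wide -w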
Mar 18 14:35:30.395: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:30.399: INFO: Number of nodes with available pods: 1 Mar 18 14:35:30.399: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:31.404: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:31.408: INFO: Number of nodes with available pods: 1 Mar 18 14:35:31.408: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:32.558: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:32.637: INFO: Number of nodes with available pods: 1 Mar 18 14:35:32.637: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:33.404: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:33.409: INFO: Number of nodes with available pods: 1 Mar 18 14:35:33.409: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:34.404: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:34.408: INFO: Number of nodes with available pods: 1 Mar 18 14:35:34.408: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:35.404: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:35.408: INFO: Number of nodes with available pods: 1 Mar 18 14:35:35.408: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:36.419: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:36.422: INFO: Number of nodes with available pods: 1 Mar 18 14:35:36.422: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:37.404: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:37.407: INFO: Number of nodes with available pods: 1 Mar 18 14:35:37.407: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:38.419: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:38.423: INFO: Number of nodes with available pods: 1 Mar 18 14:35:38.423: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:39.404: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:39.408: INFO: Number of nodes with available pods: 1 Mar 18 14:35:39.408: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:40.407: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:40.410: INFO: Number of nodes with available pods: 1 Mar 18 14:35:40.410: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:41.404: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:41.408: INFO: Number of nodes with available pods: 1 Mar 18 14:35:41.408: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:42.403: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:42.406: INFO: Number of nodes with available pods: 1 Mar 18 14:35:42.406: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:43.403: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:43.406: INFO: Number of nodes with available pods: 1 Mar 18 14:35:43.406: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:44.403: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:44.407: INFO: Number of nodes with available pods: 1 Mar 18 14:35:44.407: INFO: Node iruya-worker is running more than one daemon pod Mar 18 14:35:45.404: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 18 14:35:45.408: INFO: Number of nodes with available pods: 2 Mar 18 14:35:45.408: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3796, will wait for the garbage collector to delete the pods Mar 18 14:35:45.469: INFO: Deleting DaemonSet.extensions daemon-set took: 5.835178ms Mar 18 14:35:45.769: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.323822ms Mar 18 14:35:51.973: INFO: Number of nodes with available pods: 0 Mar 18 14:35:51.973: INFO: Number of running nodes: 0, number of available pods: 0 Mar 18 14:35:51.976: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3796/daemonsets","resourceVersion":"535226"},"items":null} Mar 18 14:35:51.979: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3796/pods","resourceVersion":"535226"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:35:51.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3796" for this suite. 
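(Note: the teardown above deletes the DaemonSet and explicitly waits for the garbage collector to remove its pods, then dumps the now-empty ("items":null) DaemonSet and Pod lists. On recent kubectl versions — the foreground flag postdates the v1.15 client used in this run — roughly:)

    kubectl delete daemonset daemon-set --cascade=foreground
    # Confirm nothing is left behind:
    kubectl get daemonsets,pods -n daemonsets-3796 -o json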
Mar 18 14:35:58.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:35:58.097: INFO: namespace daemonsets-3796 deletion completed in 6.104710912s • [SLOW TEST:32.029 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:35:58.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-8ab9b763-ca16-4be9-ac49-d77ed5af23be STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:36:02.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6832" for this suite. 
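(Note: the binary-data test above stores non-UTF-8 bytes in a ConfigMap; such content lands in the binaryData field (base64-encoded) rather than data, and the test verifies that both the text and binary keys survive the volume round trip. Sketch:)

    # Non-UTF-8 file content is stored under binaryData automatically:
    head -c 16 /dev/urandom > payload.bin
    kubectl create configmap binary-demo --from-file=payload.bin
    kubectl get configmap binary-demo -o jsonpath='{.binaryData}'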
Mar 18 14:36:24.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:36:24.316: INFO: namespace configmap-6832 deletion completed in 22.097484886s • [SLOW TEST:26.218 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:36:24.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6031 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 18 14:36:24.403: INFO: Found 0 stateful pods, waiting for 3 Mar 18 14:36:34.408: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 18 14:36:34.409: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 18 14:36:34.409: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 18 14:36:34.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 14:36:34.678: INFO: stderr: "I0318 14:36:34.553642 3705 log.go:172] (0xc000a4c000) (0xc00093c140) Create stream\nI0318 14:36:34.553738 3705 log.go:172] (0xc000a4c000) (0xc00093c140) Stream added, broadcasting: 1\nI0318 14:36:34.556481 3705 log.go:172] (0xc000a4c000) Reply frame received for 1\nI0318 14:36:34.556515 3705 log.go:172] (0xc000a4c000) (0xc0000d81e0) Create stream\nI0318 14:36:34.556524 3705 log.go:172] (0xc000a4c000) (0xc0000d81e0) Stream added, broadcasting: 3\nI0318 14:36:34.557861 3705 log.go:172] (0xc000a4c000) Reply frame received for 3\nI0318 14:36:34.557924 3705 log.go:172] (0xc000a4c000) (0xc0002f0000) Create stream\nI0318 14:36:34.557951 3705 log.go:172] (0xc000a4c000) (0xc0002f0000) Stream added, broadcasting: 5\nI0318 14:36:34.558929 3705 log.go:172] (0xc000a4c000) Reply frame received for 5\nI0318 14:36:34.640003 3705 log.go:172] (0xc000a4c000) Data frame received for 5\nI0318 14:36:34.640033 
3705 log.go:172] (0xc0002f0000) (5) Data frame handling\nI0318 14:36:34.640052 3705 log.go:172] (0xc0002f0000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0318 14:36:34.671425 3705 log.go:172] (0xc000a4c000) Data frame received for 5\nI0318 14:36:34.671445 3705 log.go:172] (0xc0002f0000) (5) Data frame handling\nI0318 14:36:34.671501 3705 log.go:172] (0xc000a4c000) Data frame received for 3\nI0318 14:36:34.671560 3705 log.go:172] (0xc0000d81e0) (3) Data frame handling\nI0318 14:36:34.671698 3705 log.go:172] (0xc0000d81e0) (3) Data frame sent\nI0318 14:36:34.671735 3705 log.go:172] (0xc000a4c000) Data frame received for 3\nI0318 14:36:34.671792 3705 log.go:172] (0xc0000d81e0) (3) Data frame handling\nI0318 14:36:34.673672 3705 log.go:172] (0xc000a4c000) Data frame received for 1\nI0318 14:36:34.673698 3705 log.go:172] (0xc00093c140) (1) Data frame handling\nI0318 14:36:34.673712 3705 log.go:172] (0xc00093c140) (1) Data frame sent\nI0318 14:36:34.673726 3705 log.go:172] (0xc000a4c000) (0xc00093c140) Stream removed, broadcasting: 1\nI0318 14:36:34.673746 3705 log.go:172] (0xc000a4c000) Go away received\nI0318 14:36:34.674192 3705 log.go:172] (0xc000a4c000) (0xc00093c140) Stream removed, broadcasting: 1\nI0318 14:36:34.674219 3705 log.go:172] (0xc000a4c000) (0xc0000d81e0) Stream removed, broadcasting: 3\nI0318 14:36:34.674231 3705 log.go:172] (0xc000a4c000) (0xc0002f0000) Stream removed, broadcasting: 5\n" Mar 18 14:36:34.678: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 14:36:34.678: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 18 14:36:44.769: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 18 14:36:54.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 14:36:55.012: INFO: stderr: "I0318 14:36:54.927960 3726 log.go:172] (0xc000130dc0) (0xc000128960) Create stream\nI0318 14:36:54.928031 3726 log.go:172] (0xc000130dc0) (0xc000128960) Stream added, broadcasting: 1\nI0318 14:36:54.930522 3726 log.go:172] (0xc000130dc0) Reply frame received for 1\nI0318 14:36:54.930567 3726 log.go:172] (0xc000130dc0) (0xc000934000) Create stream\nI0318 14:36:54.930581 3726 log.go:172] (0xc000130dc0) (0xc000934000) Stream added, broadcasting: 3\nI0318 14:36:54.931648 3726 log.go:172] (0xc000130dc0) Reply frame received for 3\nI0318 14:36:54.931682 3726 log.go:172] (0xc000130dc0) (0xc0009340a0) Create stream\nI0318 14:36:54.931694 3726 log.go:172] (0xc000130dc0) (0xc0009340a0) Stream added, broadcasting: 5\nI0318 14:36:54.932628 3726 log.go:172] (0xc000130dc0) Reply frame received for 5\nI0318 14:36:55.007954 3726 log.go:172] (0xc000130dc0) Data frame received for 3\nI0318 14:36:55.008003 3726 log.go:172] (0xc000130dc0) Data frame received for 5\nI0318 14:36:55.008030 3726 log.go:172] (0xc0009340a0) (5) Data frame handling\nI0318 14:36:55.008041 3726 log.go:172] (0xc0009340a0) (5) Data frame sent\nI0318 14:36:55.008048 3726 log.go:172] (0xc000130dc0) Data frame received for 5\nI0318 14:36:55.008055 3726 log.go:172] (0xc0009340a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0318 14:36:55.008070 3726 
log.go:172] (0xc000934000) (3) Data frame handling\nI0318 14:36:55.008155 3726 log.go:172] (0xc000934000) (3) Data frame sent\nI0318 14:36:55.008190 3726 log.go:172] (0xc000130dc0) Data frame received for 3\nI0318 14:36:55.008202 3726 log.go:172] (0xc000934000) (3) Data frame handling\nI0318 14:36:55.009617 3726 log.go:172] (0xc000130dc0) Data frame received for 1\nI0318 14:36:55.009660 3726 log.go:172] (0xc000128960) (1) Data frame handling\nI0318 14:36:55.009694 3726 log.go:172] (0xc000128960) (1) Data frame sent\nI0318 14:36:55.009746 3726 log.go:172] (0xc000130dc0) (0xc000128960) Stream removed, broadcasting: 1\nI0318 14:36:55.009780 3726 log.go:172] (0xc000130dc0) Go away received\nI0318 14:36:55.010131 3726 log.go:172] (0xc000130dc0) (0xc000128960) Stream removed, broadcasting: 1\nI0318 14:36:55.010154 3726 log.go:172] (0xc000130dc0) (0xc000934000) Stream removed, broadcasting: 3\nI0318 14:36:55.010167 3726 log.go:172] (0xc000130dc0) (0xc0009340a0) Stream removed, broadcasting: 5\n" Mar 18 14:36:55.012: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 14:36:55.012: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 14:37:05.035: INFO: Waiting for StatefulSet statefulset-6031/ss2 to complete update Mar 18 14:37:05.035: INFO: Waiting for Pod statefulset-6031/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 18 14:37:05.035: INFO: Waiting for Pod statefulset-6031/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 18 14:37:15.042: INFO: Waiting for StatefulSet statefulset-6031/ss2 to complete update Mar 18 14:37:15.042: INFO: Waiting for Pod statefulset-6031/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 18 14:37:25.042: INFO: Waiting for StatefulSet statefulset-6031/ss2 to complete update STEP: Rolling back to a previous revision Mar 18 14:37:35.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 18 14:37:35.268: INFO: stderr: "I0318 14:37:35.179259 3747 log.go:172] (0xc000116e70) (0xc00054c6e0) Create stream\nI0318 14:37:35.179312 3747 log.go:172] (0xc000116e70) (0xc00054c6e0) Stream added, broadcasting: 1\nI0318 14:37:35.182828 3747 log.go:172] (0xc000116e70) Reply frame received for 1\nI0318 14:37:35.182874 3747 log.go:172] (0xc000116e70) (0xc00054c000) Create stream\nI0318 14:37:35.182884 3747 log.go:172] (0xc000116e70) (0xc00054c000) Stream added, broadcasting: 3\nI0318 14:37:35.183838 3747 log.go:172] (0xc000116e70) Reply frame received for 3\nI0318 14:37:35.183892 3747 log.go:172] (0xc000116e70) (0xc000616280) Create stream\nI0318 14:37:35.183915 3747 log.go:172] (0xc000116e70) (0xc000616280) Stream added, broadcasting: 5\nI0318 14:37:35.184873 3747 log.go:172] (0xc000116e70) Reply frame received for 5\nI0318 14:37:35.240902 3747 log.go:172] (0xc000116e70) Data frame received for 5\nI0318 14:37:35.240939 3747 log.go:172] (0xc000616280) (5) Data frame handling\nI0318 14:37:35.240962 3747 log.go:172] (0xc000616280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0318 14:37:35.263076 3747 log.go:172] (0xc000116e70) Data frame received for 3\nI0318 14:37:35.263099 3747 log.go:172] (0xc00054c000) (3) Data frame handling\nI0318 14:37:35.263109 3747 log.go:172] (0xc00054c000) (3) Data frame sent\nI0318 14:37:35.263131 3747 log.go:172] 
(0xc000116e70) Data frame received for 5\nI0318 14:37:35.263139 3747 log.go:172] (0xc000616280) (5) Data frame handling\nI0318 14:37:35.263465 3747 log.go:172] (0xc000116e70) Data frame received for 3\nI0318 14:37:35.263487 3747 log.go:172] (0xc00054c000) (3) Data frame handling\nI0318 14:37:35.264713 3747 log.go:172] (0xc000116e70) Data frame received for 1\nI0318 14:37:35.264729 3747 log.go:172] (0xc00054c6e0) (1) Data frame handling\nI0318 14:37:35.264803 3747 log.go:172] (0xc00054c6e0) (1) Data frame sent\nI0318 14:37:35.264817 3747 log.go:172] (0xc000116e70) (0xc00054c6e0) Stream removed, broadcasting: 1\nI0318 14:37:35.264876 3747 log.go:172] (0xc000116e70) Go away received\nI0318 14:37:35.265035 3747 log.go:172] (0xc000116e70) (0xc00054c6e0) Stream removed, broadcasting: 1\nI0318 14:37:35.265052 3747 log.go:172] (0xc000116e70) (0xc00054c000) Stream removed, broadcasting: 3\nI0318 14:37:35.265058 3747 log.go:172] (0xc000116e70) (0xc000616280) Stream removed, broadcasting: 5\n" Mar 18 14:37:35.269: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 18 14:37:35.269: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 18 14:37:45.321: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 18 14:37:55.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6031 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 18 14:37:55.531: INFO: stderr: "I0318 14:37:55.467635 3768 log.go:172] (0xc000380000) (0xc0008b4000) Create stream\nI0318 14:37:55.467689 3768 log.go:172] (0xc000380000) (0xc0008b4000) Stream added, broadcasting: 1\nI0318 14:37:55.470117 3768 log.go:172] (0xc000380000) Reply frame received for 1\nI0318 14:37:55.470163 3768 log.go:172] (0xc000380000) (0xc000398320) Create stream\nI0318 14:37:55.470183 3768 log.go:172] (0xc000380000) (0xc000398320) Stream added, broadcasting: 3\nI0318 14:37:55.471211 3768 log.go:172] (0xc000380000) Reply frame received for 3\nI0318 14:37:55.471268 3768 log.go:172] (0xc000380000) (0xc000024000) Create stream\nI0318 14:37:55.471284 3768 log.go:172] (0xc000380000) (0xc000024000) Stream added, broadcasting: 5\nI0318 14:37:55.472222 3768 log.go:172] (0xc000380000) Reply frame received for 5\nI0318 14:37:55.524262 3768 log.go:172] (0xc000380000) Data frame received for 5\nI0318 14:37:55.524301 3768 log.go:172] (0xc000024000) (5) Data frame handling\nI0318 14:37:55.524315 3768 log.go:172] (0xc000024000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0318 14:37:55.524353 3768 log.go:172] (0xc000380000) Data frame received for 3\nI0318 14:37:55.524388 3768 log.go:172] (0xc000398320) (3) Data frame handling\nI0318 14:37:55.524430 3768 log.go:172] (0xc000398320) (3) Data frame sent\nI0318 14:37:55.524478 3768 log.go:172] (0xc000380000) Data frame received for 5\nI0318 14:37:55.524545 3768 log.go:172] (0xc000024000) (5) Data frame handling\nI0318 14:37:55.524578 3768 log.go:172] (0xc000380000) Data frame received for 3\nI0318 14:37:55.524606 3768 log.go:172] (0xc000398320) (3) Data frame handling\nI0318 14:37:55.526464 3768 log.go:172] (0xc000380000) Data frame received for 1\nI0318 14:37:55.526505 3768 log.go:172] (0xc0008b4000) (1) Data frame handling\nI0318 14:37:55.526546 3768 log.go:172] (0xc0008b4000) (1) Data frame sent\nI0318 14:37:55.526569 3768 log.go:172] (0xc000380000) (0xc0008b4000) Stream removed, 
broadcasting: 1\nI0318 14:37:55.526600 3768 log.go:172] (0xc000380000) Go away received\nI0318 14:37:55.526972 3768 log.go:172] (0xc000380000) (0xc0008b4000) Stream removed, broadcasting: 1\nI0318 14:37:55.527001 3768 log.go:172] (0xc000380000) (0xc000398320) Stream removed, broadcasting: 3\nI0318 14:37:55.527017 3768 log.go:172] (0xc000380000) (0xc000024000) Stream removed, broadcasting: 5\n" Mar 18 14:37:55.531: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 18 14:37:55.531: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 18 14:38:15.553: INFO: Waiting for StatefulSet statefulset-6031/ss2 to complete update Mar 18 14:38:15.553: INFO: Waiting for Pod statefulset-6031/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 18 14:38:25.561: INFO: Deleting all statefulset in ns statefulset-6031 Mar 18 14:38:25.563: INFO: Scaling statefulset ss2 to 0 Mar 18 14:38:45.580: INFO: Waiting for statefulset status.replicas updated to 0 Mar 18 14:38:45.583: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:38:45.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6031" for this suite. Mar 18 14:38:51.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:38:51.706: INFO: namespace statefulset-6031 deletion completed in 6.110527937s • [SLOW TEST:147.387 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:38:51.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 18 14:38:51.783: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a881dcb-fd48-4552-bc8d-223bf52f93ce" in namespace "downward-api-2301" to be "success or failure" Mar 18 14:38:51.786: INFO: Pod 
"downwardapi-volume-1a881dcb-fd48-4552-bc8d-223bf52f93ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.930243ms Mar 18 14:38:53.792: INFO: Pod "downwardapi-volume-1a881dcb-fd48-4552-bc8d-223bf52f93ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008687711s Mar 18 14:38:55.796: INFO: Pod "downwardapi-volume-1a881dcb-fd48-4552-bc8d-223bf52f93ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013091538s STEP: Saw pod success Mar 18 14:38:55.796: INFO: Pod "downwardapi-volume-1a881dcb-fd48-4552-bc8d-223bf52f93ce" satisfied condition "success or failure" Mar 18 14:38:55.800: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1a881dcb-fd48-4552-bc8d-223bf52f93ce container client-container: STEP: delete the pod Mar 18 14:38:55.860: INFO: Waiting for pod downwardapi-volume-1a881dcb-fd48-4552-bc8d-223bf52f93ce to disappear Mar 18 14:38:55.870: INFO: Pod downwardapi-volume-1a881dcb-fd48-4552-bc8d-223bf52f93ce no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:38:55.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2301" for this suite. Mar 18 14:39:01.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 18 14:39:01.956: INFO: namespace downward-api-2301 deletion completed in 6.082986245s • [SLOW TEST:10.250 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 18 14:39:01.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-zgc7 STEP: Creating a pod to test atomic-volume-subpath Mar 18 14:39:02.046: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zgc7" in namespace "subpath-6878" to be "success or failure" Mar 18 14:39:02.139: INFO: Pod "pod-subpath-test-secret-zgc7": Phase="Pending", Reason="", readiness=false. Elapsed: 93.203948ms Mar 18 14:39:04.143: INFO: Pod "pod-subpath-test-secret-zgc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096960627s Mar 18 14:39:06.148: INFO: Pod "pod-subpath-test-secret-zgc7": Phase="Running", Reason="", readiness=true. Elapsed: 4.10137993s Mar 18 14:39:08.152: INFO: Pod "pod-subpath-test-secret-zgc7": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.105302517s Mar 18 14:39:10.156: INFO: Pod "pod-subpath-test-secret-zgc7": Phase="Running", Reason="", readiness=true. Elapsed: 8.109563305s Mar 18 14:39:12.160: INFO: Pod "pod-subpath-test-secret-zgc7": Phase="Running", Reason="", readiness=true. Elapsed: 10.113356241s Mar 18 14:39:14.164: INFO: Pod "pod-subpath-test-secret-zgc7": Phase="Running", Reason="", readiness=true. Elapsed: 12.11744046s Mar 18 14:39:16.168: INFO: Pod "pod-subpath-test-secret-zgc7": Phase="Running", Reason="", readiness=true. Elapsed: 14.121936828s Mar 18 14:39:18.173: INFO: Pod "pod-subpath-test-secret-zgc7": Phase="Running", Reason="", readiness=true. Elapsed: 16.126306945s Mar 18 14:39:20.177: INFO: Pod "pod-subpath-test-secret-zgc7": Phase="Running", Reason="", readiness=true. Elapsed: 18.13049695s Mar 18 14:39:22.181: INFO: Pod "pod-subpath-test-secret-zgc7": Phase="Running", Reason="", readiness=true. Elapsed: 20.134263146s Mar 18 14:39:24.184: INFO: Pod "pod-subpath-test-secret-zgc7": Phase="Running", Reason="", readiness=true. Elapsed: 22.137643539s Mar 18 14:39:26.188: INFO: Pod "pod-subpath-test-secret-zgc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.142151539s STEP: Saw pod success Mar 18 14:39:26.188: INFO: Pod "pod-subpath-test-secret-zgc7" satisfied condition "success or failure" Mar 18 14:39:26.192: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-zgc7 container test-container-subpath-secret-zgc7: STEP: delete the pod Mar 18 14:39:26.228: INFO: Waiting for pod pod-subpath-test-secret-zgc7 to disappear Mar 18 14:39:26.265: INFO: Pod pod-subpath-test-secret-zgc7 no longer exists STEP: Deleting pod pod-subpath-test-secret-zgc7 Mar 18 14:39:26.265: INFO: Deleting pod "pod-subpath-test-secret-zgc7" in namespace "subpath-6878" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 18 14:39:26.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6878" for this suite. 
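(Note: the atomic-writer test above mounts a single key of a Secret at a subPath and keeps the pod Running through ~24s of polling while the container re-reads the file. A minimal subPath mount, names illustrative:)

    # subpath-demo.yaml -- sketch: mount one file from the volume, not the
    # whole directory, via subPath.
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["cat", "/mnt/data"]
        volumeMounts:
        - name: secret-vol
          mountPath: /mnt/data
          subPath: data
      volumes:
      - name: secret-vol
        secret:
          secretName: demo-secret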
SSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 18 14:39:32.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4640.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4640.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 18 14:39:38.497: INFO: DNS probes using dns-4640/dns-test-a8203723-defa-4701-88d5-9880a8223e02 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 18 14:39:38.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4640" for this suite.
Mar 18 14:39:44.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 18 14:39:44.685: INFO: namespace dns-4640 deletion completed in 6.138943421s
• [SLOW TEST:12.318 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
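The wheezy and jessie probe pods above loop over dig, checking the kubernetes service name and their own pod A record over both UDP (+notcp) and TCP (+tcp), writing an OK marker per successful lookup. The service half of that check can be reproduced from any pod with a short Go program; this sketch assumes it runs in-cluster, where /etc/resolv.conf points at the cluster DNS service, and would fail if run outside the cluster.

// Hedged sketch of the lookup the probe pods perform for the
// kubernetes service; must run inside a pod to use cluster DNS.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err) // outside a cluster this name will not resolve
	}
	for _, a := range addrs {
		fmt.Println("kubernetes service resolves to", a)
	}
}

Pod A records follow the dashed-IP form built by the awk pipeline in the probe commands, e.g. 10-244-1-7.dns-4640.pod.cluster.local for an illustrative pod IP of 10.244.1.7.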
SSSSSSSSSSSSS
Mar 18 14:39:44.685: INFO: Running AfterSuite actions on all nodes
Mar 18 14:39:44.685: INFO: Running AfterSuite actions on node 1
Mar 18 14:39:44.685: INFO: Skipping dumping logs from cluster
Ran 215 of 4412 Specs in 6242.829 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS