I0313 12:55:25.081986 6 e2e.go:243] Starting e2e run "7613ec7f-fee1-41dc-b308-b26ac2430ce3" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584104124 - Will randomize all specs
Will run 215 of 4412 specs

Mar 13 12:55:25.341: INFO: >>> kubeConfig: /root/.kube/config
Mar 13 12:55:25.344: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 13 12:55:25.360: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 13 12:55:25.380: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 13 12:55:25.380: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 13 12:55:25.380: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 13 12:55:25.385: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 13 12:55:25.385: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 13 12:55:25.385: INFO: e2e test version: v1.15.10
Mar 13 12:55:25.386: INFO: kube-apiserver version: v1.15.7
S
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:55:25.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Mar 13 12:55:25.468: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 13 12:55:25.473: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b83e8ae-758a-4ac8-a300-b751394317a8" in namespace "projected-2273" to be "success or failure"
Mar 13 12:55:25.481: INFO: Pod "downwardapi-volume-7b83e8ae-758a-4ac8-a300-b751394317a8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.293487ms
Mar 13 12:55:27.484: INFO: Pod "downwardapi-volume-7b83e8ae-758a-4ac8-a300-b751394317a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01094041s
STEP: Saw pod success
Mar 13 12:55:27.484: INFO: Pod "downwardapi-volume-7b83e8ae-758a-4ac8-a300-b751394317a8" satisfied condition "success or failure"
Mar 13 12:55:27.488: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7b83e8ae-758a-4ac8-a300-b751394317a8 container client-container:
STEP: delete the pod
Mar 13 12:55:27.524: INFO: Waiting for pod downwardapi-volume-7b83e8ae-758a-4ac8-a300-b751394317a8 to disappear
Mar 13 12:55:27.527: INFO: Pod downwardapi-volume-7b83e8ae-758a-4ac8-a300-b751394317a8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:55:27.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2273" for this suite.
Mar 13 12:55:33.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:55:33.626: INFO: namespace projected-2273 deletion completed in 6.096091483s

• [SLOW TEST:8.240 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
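For reference, the pod this kind of test creates exposes a container's cpu limit through a projected downwardAPI volume. A minimal sketch of such a manifest follows; the name, image, and mount path are illustrative assumptions, not the exact spec the framework generates:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"                     # the value the volume file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu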
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:55:33.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Mar 13 12:55:33.698: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7626,SelfLink:/api/v1/namespaces/watch-7626/configmaps/e2e-watch-test-watch-closed,UID:3b432722-7e68-406a-ad33-99e40b364da2,ResourceVersion:899469,Generation:0,CreationTimestamp:2020-03-13 12:55:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 13 12:55:33.699: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7626,SelfLink:/api/v1/namespaces/watch-7626/configmaps/e2e-watch-test-watch-closed,UID:3b432722-7e68-406a-ad33-99e40b364da2,ResourceVersion:899470,Generation:0,CreationTimestamp:2020-03-13 12:55:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Mar 13 12:55:33.709: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7626,SelfLink:/api/v1/namespaces/watch-7626/configmaps/e2e-watch-test-watch-closed,UID:3b432722-7e68-406a-ad33-99e40b364da2,ResourceVersion:899471,Generation:0,CreationTimestamp:2020-03-13 12:55:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 13 12:55:33.709: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7626,SelfLink:/api/v1/namespaces/watch-7626/configmaps/e2e-watch-test-watch-closed,UID:3b432722-7e68-406a-ad33-99e40b364da2,ResourceVersion:899472,Generation:0,CreationTimestamp:2020-03-13 12:55:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:55:33.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7626" for this suite.
Mar 13 12:55:39.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:55:39.819: INFO: namespace watch-7626 deletion completed in 6.085660624s

• [SLOW TEST:6.192 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:55:39.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 13 12:55:39.893: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Mar 13 12:55:44.897: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 13 12:55:44.897: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Mar 13 12:55:46.974: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2426,SelfLink:/apis/apps/v1/namespaces/deployment-2426/deployments/test-cleanup-deployment,UID:c0dc4fe7-09a7-4e1d-bd46-ed79b49d3bb5,ResourceVersion:899549,Generation:1,CreationTimestamp:2020-03-13 12:55:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-13 12:55:44 +0000 UTC 2020-03-13 12:55:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-13 12:55:46 +0000 UTC 2020-03-13 12:55:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Mar 13 12:55:46.977: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2426,SelfLink:/apis/apps/v1/namespaces/deployment-2426/replicasets/test-cleanup-deployment-55bbcbc84c,UID:d0005246-383c-4eed-be05-528098e041ef,ResourceVersion:899538,Generation:1,CreationTimestamp:2020-03-13 12:55:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c0dc4fe7-09a7-4e1d-bd46-ed79b49d3bb5 0xc002622967 0xc002622968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Mar 13 12:55:46.979: INFO: Pod "test-cleanup-deployment-55bbcbc84c-l2zml" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-l2zml,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2426,SelfLink:/api/v1/namespaces/deployment-2426/pods/test-cleanup-deployment-55bbcbc84c-l2zml,UID:3dff1c67-0460-43bb-85fd-e0e900951f84,ResourceVersion:899537,Generation:0,CreationTimestamp:2020-03-13 12:55:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c d0005246-383c-4eed-be05-528098e041ef 0xc002622f57 0xc002622f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-llzqv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-llzqv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-llzqv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002622fd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002622ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 12:55:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 12:55:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 12:55:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 12:55:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.1.165,StartTime:2020-03-13 12:55:45 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-13 12:55:46 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://6da13a3b449dc8aa97ce29bf53702e93eaf159ba2dc1b506c31aa6be9e96b95b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:55:46.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2426" for this suite.
Mar 13 12:55:52.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:55:53.040: INFO: namespace deployment-2426 deletion completed in 6.057615668s

• [SLOW TEST:13.221 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
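The dump above shows RevisionHistoryLimit:*0, which is what makes old replica sets eligible for immediate deletion. A rough manifest for a deployment configured the same way might look like this (grounded in the labels and image from the dump; the structure is a sketch, not the framework's exact object):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0   # old ReplicaSets are deleted as soon as they are scaled down
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0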
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:55:53.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 13 12:55:55.119: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:55:55.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1644" for this suite.
Mar 13 12:56:01.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:56:01.254: INFO: namespace container-runtime-1644 deletion completed in 6.092960364s

• [SLOW TEST:8.214 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
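The "DONE" the framework matched above came from the container log, not from the termination-message file. A sketch of a pod exercising that fallback, with illustrative name and image, could be:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: term-container
    image: busybox                                  # assumed image
    command: ["sh", "-c", "echo -n DONE; exit 1"]   # writes to the log, then fails
    terminationMessagePath: /dev/termination-log    # default path, left empty here
    terminationMessagePolicy: FallbackToLogsOnError # so the log tail becomes the message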
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition
  creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:56:01.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 13 12:56:01.328: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:56:02.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7473" for this suite.
Mar 13 12:56:08.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:56:08.483: INFO: namespace custom-resource-definition-7473 deletion completed in 6.101212021s

• [SLOW TEST:7.228 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
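For reference, a minimal CRD of the kind this test creates and deletes might look like the following on a v1.15 cluster (which still serves apiextensions.k8s.io/v1beta1); the group and kind names here are illustrative:

apiVersion: apiextensions.k8s.io/v1beta1   # v1 CRDs arrived in Kubernetes 1.16
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com        # must be <plural>.<group>
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab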
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:56:08.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-1b92493d-d074-4383-8742-4a122d1535cc
STEP: Creating a pod to test consume configMaps
Mar 13 12:56:08.607: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-62fc3bd3-8453-4d7b-b6cc-4cab62d0a71c" in namespace "projected-3206" to be "success or failure"
Mar 13 12:56:08.636: INFO: Pod "pod-projected-configmaps-62fc3bd3-8453-4d7b-b6cc-4cab62d0a71c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.251925ms
Mar 13 12:56:10.640: INFO: Pod "pod-projected-configmaps-62fc3bd3-8453-4d7b-b6cc-4cab62d0a71c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.03273075s
STEP: Saw pod success
Mar 13 12:56:10.640: INFO: Pod "pod-projected-configmaps-62fc3bd3-8453-4d7b-b6cc-4cab62d0a71c" satisfied condition "success or failure"
Mar 13 12:56:10.642: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-62fc3bd3-8453-4d7b-b6cc-4cab62d0a71c container projected-configmap-volume-test:
STEP: delete the pod
Mar 13 12:56:10.680: INFO: Waiting for pod pod-projected-configmaps-62fc3bd3-8453-4d7b-b6cc-4cab62d0a71c to disappear
Mar 13 12:56:10.696: INFO: Pod pod-projected-configmaps-62fc3bd3-8453-4d7b-b6cc-4cab62d0a71c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:56:10.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3206" for this suite.
Mar 13 12:56:16.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:56:16.778: INFO: namespace projected-3206 deletion completed in 6.079127029s

• [SLOW TEST:8.294 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
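A rough sketch of the consumer pod such a test creates, mounting a ConfigMap through a projected volume (names, image, and paths are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # assumed image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # must exist in the same namespace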
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:56:16.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Mar 13 12:56:16.835: INFO: Waiting up to 5m0s for pod "downward-api-b3f1aa29-8c23-4d10-bb4f-d52f245a8c0a" in namespace "downward-api-157" to be "success or failure"
Mar 13 12:56:16.846: INFO: Pod "downward-api-b3f1aa29-8c23-4d10-bb4f-d52f245a8c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.68371ms
Mar 13 12:56:18.849: INFO: Pod "downward-api-b3f1aa29-8c23-4d10-bb4f-d52f245a8c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014265104s
Mar 13 12:56:20.853: INFO: Pod "downward-api-b3f1aa29-8c23-4d10-bb4f-d52f245a8c0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017855031s
STEP: Saw pod success
Mar 13 12:56:20.853: INFO: Pod "downward-api-b3f1aa29-8c23-4d10-bb4f-d52f245a8c0a" satisfied condition "success or failure"
Mar 13 12:56:20.856: INFO: Trying to get logs from node iruya-worker2 pod downward-api-b3f1aa29-8c23-4d10-bb4f-d52f245a8c0a container dapi-container:
STEP: delete the pod
Mar 13 12:56:20.902: INFO: Waiting for pod downward-api-b3f1aa29-8c23-4d10-bb4f-d52f245a8c0a to disappear
Mar 13 12:56:20.906: INFO: Pod downward-api-b3f1aa29-8c23-4d10-bb4f-d52f245a8c0a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:56:20.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-157" for this suite.
Mar 13 12:56:26.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:56:26.995: INFO: namespace downward-api-157 deletion completed in 6.086639401s

• [SLOW TEST:10.217 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
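The env-var flavor of the downward API, as exercised here, maps pod status fields into the container environment. A minimal sketch (name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox             # assumed image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # the field this test asserts on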
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:56:26.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:56:29.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4781" for this suite.
Mar 13 12:57:19.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:57:19.167: INFO: namespace kubelet-test-4781 deletion completed in 50.106485674s

• [SLOW TEST:52.172 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
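The read-only root filesystem behavior checked here is a one-field container security setting. A sketch of such a pod, with illustrative name and command:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /file && echo writable || echo read-only"]
    securityContext:
      readOnlyRootFilesystem: true   # writes anywhere outside mounted volumes must fail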
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:57:19.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 13 12:57:19.219: INFO: Waiting up to 5m0s for pod "downwardapi-volume-005a701e-414b-4f55-86f2-056685f09ae6" in namespace "downward-api-7299" to be "success or failure"
Mar 13 12:57:19.224: INFO: Pod "downwardapi-volume-005a701e-414b-4f55-86f2-056685f09ae6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.451819ms
Mar 13 12:57:21.228: INFO: Pod "downwardapi-volume-005a701e-414b-4f55-86f2-056685f09ae6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008375706s
STEP: Saw pod success
Mar 13 12:57:21.228: INFO: Pod "downwardapi-volume-005a701e-414b-4f55-86f2-056685f09ae6" satisfied condition "success or failure"
Mar 13 12:57:21.231: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-005a701e-414b-4f55-86f2-056685f09ae6 container client-container:
STEP: delete the pod
Mar 13 12:57:21.250: INFO: Waiting for pod downwardapi-volume-005a701e-414b-4f55-86f2-056685f09ae6 to disappear
Mar 13 12:57:21.254: INFO: Pod downwardapi-volume-005a701e-414b-4f55-86f2-056685f09ae6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:57:21.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7299" for this suite.
Mar 13 12:57:27.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:57:27.329: INFO: namespace downward-api-7299 deletion completed in 6.070612398s

• [SLOW TEST:8.161 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:57:27.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 13 12:57:27.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8513'
Mar 13 12:57:28.911: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 13 12:57:28.911: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Mar 13 12:57:28.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8513'
Mar 13 12:57:29.006: INFO: stderr: ""
Mar 13 12:57:29.006: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:57:29.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8513" for this suite.
Mar 13 12:57:35.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:57:35.139: INFO: namespace kubectl-8513 deletion completed in 6.130364589s

• [SLOW TEST:7.810 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
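Since the log itself flags `kubectl run --generator=job/v1` as deprecated, the equivalent declarative Job is worth noting. A sketch, using the name and image from the test run (the container name is an assumption):

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure   # the property this test asserts
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine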
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:57:35.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 13 12:57:35.192: INFO: Creating ReplicaSet my-hostname-basic-c33f06e1-ff3f-4060-95be-1bc54a2ebde9
Mar 13 12:57:35.209: INFO: Pod name my-hostname-basic-c33f06e1-ff3f-4060-95be-1bc54a2ebde9: Found 0 pods out of 1
Mar 13 12:57:40.213: INFO: Pod name my-hostname-basic-c33f06e1-ff3f-4060-95be-1bc54a2ebde9: Found 1 pods out of 1
Mar 13 12:57:40.213: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c33f06e1-ff3f-4060-95be-1bc54a2ebde9" is running
Mar 13 12:57:40.216: INFO: Pod "my-hostname-basic-c33f06e1-ff3f-4060-95be-1bc54a2ebde9-82r4n" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 12:57:35 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 12:57:36 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 12:57:36 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 12:57:35 +0000 UTC Reason: Message:}])
Mar 13 12:57:40.216: INFO: Trying to dial the pod
Mar 13 12:57:45.226: INFO: Controller my-hostname-basic-c33f06e1-ff3f-4060-95be-1bc54a2ebde9: Got expected result from replica 1 [my-hostname-basic-c33f06e1-ff3f-4060-95be-1bc54a2ebde9-82r4n]: "my-hostname-basic-c33f06e1-ff3f-4060-95be-1bc54a2ebde9-82r4n", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:57:45.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2008" for this suite.
Mar 13 12:57:51.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:57:51.337: INFO: namespace replicaset-2008 deletion completed in 6.107444606s

• [SLOW TEST:16.198 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
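A rough manifest for a ReplicaSet like the one created above; the test's actual name carries a generated UID suffix, and the serve-hostname image and port here are assumptions based on the test's behavior of echoing the pod's hostname back to the dialer:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic   # the framework appends a generated UID
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image
        ports:
        - containerPort: 9376   # assumed port the hostname server listens on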
SSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:57:51.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-e93c8e57-1b50-4a47-a81c-f95a4b3e5240
STEP: Creating a pod to test consume configMaps
Mar 13 12:57:51.424: INFO: Waiting up to 5m0s for pod "pod-configmaps-f871d47d-b5b4-4883-b23f-edf6c3acdccb" in namespace "configmap-7358" to be "success or failure"
Mar 13 12:57:51.474: INFO: Pod "pod-configmaps-f871d47d-b5b4-4883-b23f-edf6c3acdccb": Phase="Pending", Reason="", readiness=false. Elapsed: 49.659994ms
Mar 13 12:57:53.477: INFO: Pod "pod-configmaps-f871d47d-b5b4-4883-b23f-edf6c3acdccb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.052722631s
STEP: Saw pod success
Mar 13 12:57:53.477: INFO: Pod "pod-configmaps-f871d47d-b5b4-4883-b23f-edf6c3acdccb" satisfied condition "success or failure"
Mar 13 12:57:53.479: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-f871d47d-b5b4-4883-b23f-edf6c3acdccb container configmap-volume-test:
STEP: delete the pod
Mar 13 12:57:53.517: INFO: Waiting for pod pod-configmaps-f871d47d-b5b4-4883-b23f-edf6c3acdccb to disappear
Mar 13 12:57:53.524: INFO: Pod pod-configmaps-f871d47d-b5b4-4883-b23f-edf6c3acdccb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:57:53.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7358" for this suite.
Mar 13 12:57:59.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:57:59.624: INFO: namespace configmap-7358 deletion completed in 6.095952889s

• [SLOW TEST:8.287 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
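This variant combines two things: a key-to-path mapping in the volume and a non-root security context. A sketch under those assumptions (names, uid, key, and paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # the "as non-root" part of the test
  containers:
  - name: configmap-volume-test
    image: busybox               # assumed image
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map   # must exist in the same namespace
      items:
      - key: data-2
        path: path/to/data-2            # the "with mappings" part: key relocated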
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:57:59.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-366de11d-1fd5-4b6b-9641-06165e86cd82
STEP: Creating a pod to test consume secrets
Mar 13 12:57:59.761: INFO: Waiting up to 5m0s for pod "pod-secrets-800d592f-56d8-4ba5-8f16-df827d8bf32b" in namespace "secrets-9425" to be "success or failure"
Mar 13 12:57:59.766: INFO: Pod "pod-secrets-800d592f-56d8-4ba5-8f16-df827d8bf32b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.802676ms
Mar 13 12:58:01.773: INFO: Pod "pod-secrets-800d592f-56d8-4ba5-8f16-df827d8bf32b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011477745s
STEP: Saw pod success
Mar 13 12:58:01.773: INFO: Pod "pod-secrets-800d592f-56d8-4ba5-8f16-df827d8bf32b" satisfied condition "success or failure"
Mar 13 12:58:01.776: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-800d592f-56d8-4ba5-8f16-df827d8bf32b container secret-volume-test:
STEP: delete the pod
Mar 13 12:58:01.791: INFO: Waiting for pod pod-secrets-800d592f-56d8-4ba5-8f16-df827d8bf32b to disappear
Mar 13 12:58:01.802: INFO: Pod pod-secrets-800d592f-56d8-4ba5-8f16-df827d8bf32b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:58:01.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9425" for this suite.
Mar 13 12:58:07.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:58:07.927: INFO: namespace secrets-9425 deletion completed in 6.096307549s
STEP: Destroying namespace "secret-namespace-1158" for this suite.
Mar 13 12:58:13.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:58:14.040: INFO: namespace secret-namespace-1158 deletion completed in 6.112090607s

• [SLOW TEST:14.415 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
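The point of this test is namespace isolation: a secret volume is resolved in the pod's own namespace, so a same-named secret elsewhere (the second namespace destroyed above) cannot leak in. A sketch of the consumer pod, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox            # assumed image
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # looked up only in the pod's namespace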
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:58:14.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 13 12:58:14.091: INFO: Waiting up to 5m0s for pod "pod-d8d25dd8-9165-4704-9f17-f3980d6d5d33" in namespace "emptydir-7045" to be "success or failure"
Mar 13 12:58:14.108: INFO: Pod "pod-d8d25dd8-9165-4704-9f17-f3980d6d5d33": Phase="Pending", Reason="", readiness=false. Elapsed: 17.117484ms
Mar 13 12:58:16.136: INFO: Pod "pod-d8d25dd8-9165-4704-9f17-f3980d6d5d33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.044513125s
STEP: Saw pod success
Mar 13 12:58:16.136: INFO: Pod "pod-d8d25dd8-9165-4704-9f17-f3980d6d5d33" satisfied condition "success or failure"
Mar 13 12:58:16.138: INFO: Trying to get logs from node iruya-worker pod pod-d8d25dd8-9165-4704-9f17-f3980d6d5d33 container test-container:
STEP: delete the pod
Mar 13 12:58:16.157: INFO: Waiting for pod pod-d8d25dd8-9165-4704-9f17-f3980d6d5d33 to disappear
Mar 13 12:58:16.161: INFO: Pod pod-d8d25dd8-9165-4704-9f17-f3980d6d5d33 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:58:16.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7045" for this suite.
Mar 13 12:58:22.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:58:22.271: INFO: namespace emptydir-7045 deletion completed in 6.107640016s

• [SLOW TEST:8.231 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
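In the test name, "(root,0666,tmpfs)" reads as: running as root, expecting file mode 0666, on a memory-backed emptyDir. A sketch of such a pod (the command is an illustrative stand-in for the framework's mounttest binary):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox             # assumed image
    command: ["sh", "-c", "echo test > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # the "tmpfs" part: backed by RAM, not node disk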
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:58:22.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-74bf65d1-e2a8-48dc-aab2-d6f5365e2c62
STEP: Creating a pod to test consume configMaps
Mar 13 12:58:22.334: INFO: Waiting up to 5m0s for pod "pod-configmaps-a941265b-02fd-40f6-817b-3dd8b68a2286" in namespace "configmap-7497" to be "success or failure"
Mar 13 12:58:22.339: INFO: Pod "pod-configmaps-a941265b-02fd-40f6-817b-3dd8b68a2286": Phase="Pending", Reason="", readiness=false. Elapsed: 4.539402ms
Mar 13 12:58:24.342: INFO: Pod "pod-configmaps-a941265b-02fd-40f6-817b-3dd8b68a2286": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00754094s
STEP: Saw pod success
Mar 13 12:58:24.342: INFO: Pod "pod-configmaps-a941265b-02fd-40f6-817b-3dd8b68a2286" satisfied condition "success or failure"
Mar 13 12:58:24.343: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-a941265b-02fd-40f6-817b-3dd8b68a2286 container configmap-volume-test:
STEP: delete the pod
Mar 13 12:58:24.383: INFO: Waiting for pod pod-configmaps-a941265b-02fd-40f6-817b-3dd8b68a2286 to disappear
Mar 13 12:58:24.386: INFO: Pod pod-configmaps-a941265b-02fd-40f6-817b-3dd8b68a2286 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:58:24.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7497" for this suite.
Mar 13 12:58:30.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:58:30.482: INFO: namespace configmap-7497 deletion completed in 6.093397628s

• [SLOW TEST:8.211 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:58:30.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 13 12:58:30.568: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2df5236-e0c4-4124-a52b-8ab454cc7557" in namespace "downward-api-6860" to be "success or failure"
Mar 13 12:58:30.573: INFO: Pod "downwardapi-volume-d2df5236-e0c4-4124-a52b-8ab454cc7557": Phase="Pending", Reason="", readiness=false. Elapsed: 4.585372ms
Mar 13 12:58:32.577: INFO: Pod "downwardapi-volume-d2df5236-e0c4-4124-a52b-8ab454cc7557": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008484508s
STEP: Saw pod success
Mar 13 12:58:32.577: INFO: Pod "downwardapi-volume-d2df5236-e0c4-4124-a52b-8ab454cc7557" satisfied condition "success or failure"
Mar 13 12:58:32.581: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d2df5236-e0c4-4124-a52b-8ab454cc7557 container client-container:
STEP: delete the pod
Mar 13 12:58:32.598: INFO: Waiting for pod downwardapi-volume-d2df5236-e0c4-4124-a52b-8ab454cc7557 to disappear
Mar 13 12:58:32.603: INFO: Pod downwardapi-volume-d2df5236-e0c4-4124-a52b-8ab454cc7557 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:58:32.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6860" for this suite.
Mar 13 12:58:38.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:58:38.717: INFO: namespace downward-api-6860 deletion completed in 6.110708621s

• [SLOW TEST:8.234 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 12:58:38.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Mar 13 12:58:38.798: INFO: Waiting up to 5m0s for pod "var-expansion-69dbbbea-7627-4081-a6c1-65153aa9fe8a" in namespace "var-expansion-8648" to be "success or failure"
Mar 13 12:58:38.803: INFO: Pod "var-expansion-69dbbbea-7627-4081-a6c1-65153aa9fe8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.520419ms
Mar 13 12:58:40.806: INFO: Pod "var-expansion-69dbbbea-7627-4081-a6c1-65153aa9fe8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008098825s
STEP: Saw pod success
Mar 13 12:58:40.806: INFO: Pod "var-expansion-69dbbbea-7627-4081-a6c1-65153aa9fe8a" satisfied condition "success or failure"
Mar 13 12:58:40.809: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-69dbbbea-7627-4081-a6c1-65153aa9fe8a container dapi-container:
STEP: delete the pod
Mar 13 12:58:40.838: INFO: Waiting for pod var-expansion-69dbbbea-7627-4081-a6c1-65153aa9fe8a to disappear
Mar 13 12:58:40.844: INFO: Pod var-expansion-69dbbbea-7627-4081-a6c1-65153aa9fe8a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 13 12:58:40.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8648" for this suite.
Mar 13 12:58:46.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 12:58:46.949: INFO: namespace var-expansion-8648 deletion completed in 6.101975407s

• [SLOW TEST:8.232 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
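The substitution being tested is the kubelet's own $(VAR) expansion in command and args, which happens before the process starts and independently of any shell. A sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox              # assumed image
    command: ["sh", "-c", "echo $(MY_VAR)"]   # $(MY_VAR) is expanded by the kubelet, not the shell
    env:
    - name: MY_VAR
      value: "from-env"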
Mar 13 12:58:46.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 12:58:46.949: INFO: namespace var-expansion-8648 deletion completed in 6.101975407s • [SLOW TEST:8.232 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 12:58:46.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4871.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4871.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4871.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4871.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4871.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4871.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 13 12:58:51.113: INFO: DNS probes using dns-4871/dns-test-cb716f66-0c7d-436b-b0ae-5e4ae4c83873 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 12:58:51.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4871" for this suite. 
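The DNS /etc/hosts spec polls getent inside the wheezy and jessie prober pods until the expected entries appear. Since the kubelet manages /etc/hosts for every pod, a quick manual check only needs a throwaway pod (name invented):

kubectl run hosts-probe --image=docker.io/library/busybox:1.29 --restart=Never -- sleep 3600
kubectl exec hosts-probe -- cat /etc/hosts   # kubelet-managed; includes the pod's own IP and hostname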
Mar 13 12:58:57.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 12:58:57.311: INFO: namespace dns-4871 deletion completed in 6.143407143s • [SLOW TEST:10.361 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 12:58:57.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-ffb22cbb-7bb4-4a7a-97ab-d95499667f07 STEP: Creating a pod to test consume configMaps Mar 13 12:58:57.443: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a8aff9f2-bf44-4bec-9db9-a2341d71c07a" in namespace "projected-9426" to be "success or failure" Mar 13 12:58:57.459: INFO: Pod "pod-projected-configmaps-a8aff9f2-bf44-4bec-9db9-a2341d71c07a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.071871ms Mar 13 12:58:59.462: INFO: Pod "pod-projected-configmaps-a8aff9f2-bf44-4bec-9db9-a2341d71c07a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019228502s STEP: Saw pod success Mar 13 12:58:59.462: INFO: Pod "pod-projected-configmaps-a8aff9f2-bf44-4bec-9db9-a2341d71c07a" satisfied condition "success or failure" Mar 13 12:58:59.466: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-a8aff9f2-bf44-4bec-9db9-a2341d71c07a container projected-configmap-volume-test: STEP: delete the pod Mar 13 12:58:59.478: INFO: Waiting for pod pod-projected-configmaps-a8aff9f2-bf44-4bec-9db9-a2341d71c07a to disappear Mar 13 12:58:59.483: INFO: Pod pod-projected-configmaps-a8aff9f2-bf44-4bec-9db9-a2341d71c07a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 12:58:59.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9426" for this suite. 
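The projected-configmap spec mounts a configMap through a projected volume while the pod runs as a non-root UID; the same pattern applies to the plain configMap-volume variant of this test further down the log. A sketch with invented names:

kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # non-root, as the [LinuxOnly] spec requires
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-demo
EOF
kubectl logs projected-cm-demo    # value-1, readable because configMap files default to mode 0644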
Mar 13 12:59:05.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 12:59:05.587: INFO: namespace projected-9426 deletion completed in 6.100516465s • [SLOW TEST:8.276 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 12:59:05.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 12:59:05.641: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86924c8d-447f-414d-ba21-691f8e91034b" in namespace "projected-2402" to be "success or failure" Mar 13 12:59:05.645: INFO: Pod "downwardapi-volume-86924c8d-447f-414d-ba21-691f8e91034b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097702ms Mar 13 12:59:07.648: INFO: Pod "downwardapi-volume-86924c8d-447f-414d-ba21-691f8e91034b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007510281s STEP: Saw pod success Mar 13 12:59:07.648: INFO: Pod "downwardapi-volume-86924c8d-447f-414d-ba21-691f8e91034b" satisfied condition "success or failure" Mar 13 12:59:07.650: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-86924c8d-447f-414d-ba21-691f8e91034b container client-container: STEP: delete the pod Mar 13 12:59:07.707: INFO: Waiting for pod downwardapi-volume-86924c8d-447f-414d-ba21-691f8e91034b to disappear Mar 13 12:59:07.729: INFO: Pod downwardapi-volume-86924c8d-447f-414d-ba21-691f8e91034b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 12:59:07.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2402" for this suite. 
Mar 13 12:59:13.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 12:59:13.809: INFO: namespace projected-2402 deletion completed in 6.076820363s • [SLOW TEST:8.221 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 12:59:13.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 13 12:59:16.894: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 12:59:17.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9412" for this suite. 
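The ReplicaSet spec relies on controller adoption: an orphan pod whose labels match a ReplicaSet's selector gains an ownerReference to that ReplicaSet, and changing the matched label releases the pod again. Roughly, with invented names:

# 1. an orphan pod carrying the label the ReplicaSet will select
kubectl run pod-adoption-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never \
  --labels=name=pod-adoption-demo
# 2. a ReplicaSet with a matching selector adopts it
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-adoption-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-demo
  template:
    metadata:
      labels:
        name: pod-adoption-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl get pod pod-adoption-demo -o jsonpath='{.metadata.ownerReferences[0].kind}'   # ReplicaSet
# 3. changing the matched label releases the pod; the ReplicaSet spawns a replacement
kubectl label pod pod-adoption-demo name=released --overwrite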
Mar 13 12:59:33.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 12:59:34.054: INFO: namespace replicaset-9412 deletion completed in 16.096112752s • [SLOW TEST:20.245 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 12:59:34.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 13 12:59:34.126: INFO: Waiting up to 5m0s for pod "pod-3b4c791f-86b2-43e9-aa25-e32ba53870d8" in namespace "emptydir-66" to be "success or failure" Mar 13 12:59:34.162: INFO: Pod "pod-3b4c791f-86b2-43e9-aa25-e32ba53870d8": Phase="Pending", Reason="", readiness=false. Elapsed: 36.283274ms Mar 13 12:59:36.165: INFO: Pod "pod-3b4c791f-86b2-43e9-aa25-e32ba53870d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.039361455s STEP: Saw pod success Mar 13 12:59:36.165: INFO: Pod "pod-3b4c791f-86b2-43e9-aa25-e32ba53870d8" satisfied condition "success or failure" Mar 13 12:59:36.184: INFO: Trying to get logs from node iruya-worker pod pod-3b4c791f-86b2-43e9-aa25-e32ba53870d8 container test-container: STEP: delete the pod Mar 13 12:59:36.204: INFO: Waiting for pod pod-3b4c791f-86b2-43e9-aa25-e32ba53870d8 to disappear Mar 13 12:59:36.208: INFO: Pod pod-3b4c791f-86b2-43e9-aa25-e32ba53870d8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 12:59:36.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-66" for this suite. 
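The emptyDir spec asserts both the filesystem type (tmpfs, because of medium: Memory) and the mode of the mounted directory. A hand-rolled equivalent, with invented names; the exact mode string is what the spec itself asserts:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "mount | grep /ephemeral; ls -ld /ephemeral"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory        # tmpfs-backed instead of node disk
EOF
kubectl logs emptydir-tmpfs-demo   # first line should report a tmpfs mount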
Mar 13 12:59:42.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 12:59:42.290: INFO: namespace emptydir-66 deletion completed in 6.078816986s • [SLOW TEST:8.236 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 12:59:42.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 13 12:59:42.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-2481' Mar 13 12:59:42.379: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 13 12:59:42.379: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Mar 13 12:59:46.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2481' Mar 13 12:59:46.496: INFO: stderr: "" Mar 13 12:59:46.496: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 12:59:46.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2481" for this suite. 
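Note the stderr captured above: kubectl itself warns that the deployment/apps.v1 generator is deprecated (it was removed in later releases). The non-deprecated way to get the same deployment-from-image behavior:

kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl get deployment e2e-test-nginx-deployment      # the deployment was created
kubectl get pods -l app=e2e-test-nginx-deployment     # ...and the pod it controls
kubectl delete deployment e2e-test-nginx-deployment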
Mar 13 13:00:08.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:00:08.604: INFO: namespace kubectl-2481 deletion completed in 22.105101993s • [SLOW TEST:26.314 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:00:08.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Mar 13 13:00:08.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-489 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 13 13:00:10.282: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0313 13:00:10.222745 129 log.go:172] (0xc000950580) (0xc0003688c0) Create stream\nI0313 13:00:10.222804 129 log.go:172] (0xc000950580) (0xc0003688c0) Stream added, broadcasting: 1\nI0313 13:00:10.228689 129 log.go:172] (0xc000950580) Reply frame received for 1\nI0313 13:00:10.228744 129 log.go:172] (0xc000950580) (0xc000368000) Create stream\nI0313 13:00:10.228757 129 log.go:172] (0xc000950580) (0xc000368000) Stream added, broadcasting: 3\nI0313 13:00:10.230709 129 log.go:172] (0xc000950580) Reply frame received for 3\nI0313 13:00:10.230769 129 log.go:172] (0xc000950580) (0xc0005ae140) Create stream\nI0313 13:00:10.230799 129 log.go:172] (0xc000950580) (0xc0005ae140) Stream added, broadcasting: 5\nI0313 13:00:10.232682 129 log.go:172] (0xc000950580) Reply frame received for 5\nI0313 13:00:10.232718 129 log.go:172] (0xc000950580) (0xc0003680a0) Create stream\nI0313 13:00:10.232729 129 log.go:172] (0xc000950580) (0xc0003680a0) Stream added, broadcasting: 7\nI0313 13:00:10.233914 129 log.go:172] (0xc000950580) Reply frame received for 7\nI0313 13:00:10.234068 129 log.go:172] (0xc000368000) (3) Writing data frame\nI0313 13:00:10.234194 129 log.go:172] (0xc000368000) (3) Writing data frame\nI0313 13:00:10.235180 129 log.go:172] (0xc000950580) Data frame received for 5\nI0313 13:00:10.235198 129 log.go:172] (0xc0005ae140) (5) Data frame handling\nI0313 13:00:10.235210 129 log.go:172] (0xc0005ae140) (5) Data frame sent\nI0313 13:00:10.236474 129 log.go:172] (0xc000950580) Data frame received for 5\nI0313 13:00:10.236487 129 log.go:172] (0xc0005ae140) (5) Data frame handling\nI0313 13:00:10.236497 129 log.go:172] (0xc0005ae140) (5) Data frame sent\nI0313 13:00:10.263554 129 log.go:172] (0xc000950580) Data frame received for 7\nI0313 13:00:10.263581 129 log.go:172] (0xc0003680a0) (7) Data frame handling\nI0313 13:00:10.263608 129 log.go:172] (0xc000950580) Data frame received for 5\nI0313 13:00:10.263629 129 log.go:172] (0xc0005ae140) (5) Data frame handling\nI0313 13:00:10.263852 129 log.go:172] (0xc000950580) Data frame received for 1\nI0313 13:00:10.263876 129 log.go:172] (0xc0003688c0) (1) Data frame handling\nI0313 13:00:10.263891 129 log.go:172] (0xc0003688c0) (1) Data frame sent\nI0313 13:00:10.263905 129 log.go:172] (0xc000950580) (0xc0003688c0) Stream removed, broadcasting: 1\nI0313 13:00:10.263941 129 log.go:172] (0xc000950580) (0xc000368000) Stream removed, broadcasting: 3\nI0313 13:00:10.263986 129 log.go:172] (0xc000950580) Go away received\nI0313 13:00:10.264057 129 log.go:172] (0xc000950580) (0xc0003688c0) Stream removed, broadcasting: 1\nI0313 13:00:10.264081 129 log.go:172] (0xc000950580) (0xc000368000) Stream removed, broadcasting: 3\nI0313 13:00:10.264089 129 log.go:172] (0xc000950580) (0xc0005ae140) Stream removed, broadcasting: 5\nI0313 13:00:10.264097 129 log.go:172] (0xc000950580) (0xc0003680a0) Stream removed, broadcasting: 7\n" Mar 13 13:00:10.282: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:00:12.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-489" for this suite. 
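This spec feeds "abcd1234" over an attached stdin and relies on --rm to clean up afterwards; the job/v1 generator it uses triggers the deprecation warning captured above and is gone from current kubectl. A modern equivalent runs a bare pod instead of a Job:

echo abcd1234 | kubectl run e2e-test-rm-demo --image=docker.io/library/busybox:1.29 \
  --rm -i --restart=Never -- sh -c 'cat && echo "stdin closed"'
# -i streams stdin to the container; --rm deletes the pod once it exits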
Mar 13 13:00:18.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:00:18.372: INFO: namespace kubectl-489 deletion completed in 6.082578865s • [SLOW TEST:9.767 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:00:18.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-4cc4a82b-46c8-4254-bcdc-bb0d0b6e879c STEP: Creating a pod to test consume secrets Mar 13 13:00:18.432: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-874134c8-9a3a-4a90-abe8-92b16af10334" in namespace "projected-3057" to be "success or failure" Mar 13 13:00:18.450: INFO: Pod "pod-projected-secrets-874134c8-9a3a-4a90-abe8-92b16af10334": Phase="Pending", Reason="", readiness=false. Elapsed: 17.841757ms Mar 13 13:00:20.453: INFO: Pod "pod-projected-secrets-874134c8-9a3a-4a90-abe8-92b16af10334": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020958812s STEP: Saw pod success Mar 13 13:00:20.453: INFO: Pod "pod-projected-secrets-874134c8-9a3a-4a90-abe8-92b16af10334" satisfied condition "success or failure" Mar 13 13:00:20.455: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-874134c8-9a3a-4a90-abe8-92b16af10334 container projected-secret-volume-test: STEP: delete the pod Mar 13 13:00:20.479: INFO: Waiting for pod pod-projected-secrets-874134c8-9a3a-4a90-abe8-92b16af10334 to disappear Mar 13 13:00:20.484: INFO: Pod pod-projected-secrets-874134c8-9a3a-4a90-abe8-92b16af10334 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:00:20.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3057" for this suite. 
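The projected-secret spec combines three knobs: a non-root runAsUser, a pod-level fsGroup that sets group ownership on the volume's files, and a defaultMode on the projected volume. A sketch with invented names and values:

kubectl create secret generic secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 2000             # applied to the volume's files
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/secret && cat /etc/secret/data-1"]
    volumeMounts:
    - name: sec
      mountPath: /etc/secret
  volumes:
  - name: sec
    projected:
      defaultMode: 0440       # group-readable, so uid 1000 can read via fsGroup 2000
      sources:
      - secret:
          name: secret-demo
EOF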
Mar 13 13:00:26.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:00:26.551: INFO: namespace projected-3057 deletion completed in 6.064093418s • [SLOW TEST:8.178 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:00:26.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1137.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1137.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1137.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1137.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1137.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1137.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1137.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1137.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1137.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1137.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1137.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 86.246.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.246.86_udp@PTR;check="$$(dig +tcp +noall +answer +search 86.246.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.246.86_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1137.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1137.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1137.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1137.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1137.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1137.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1137.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1137.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1137.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1137.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1137.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 86.246.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.246.86_udp@PTR;check="$$(dig +tcp +noall +answer +search 86.246.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.246.86_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 13 13:00:30.711: INFO: Unable to read wheezy_udp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:30.713: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:30.716: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:30.718: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:30.737: INFO: Unable to read jessie_udp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:30.739: INFO: Unable to read jessie_tcp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:30.741: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:30.743: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:30.757: INFO: Lookups using dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f failed for: [wheezy_udp@dns-test-service.dns-1137.svc.cluster.local wheezy_tcp@dns-test-service.dns-1137.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local jessie_udp@dns-test-service.dns-1137.svc.cluster.local jessie_tcp@dns-test-service.dns-1137.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local] Mar 13 13:00:35.761: INFO: Unable to read wheezy_udp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:35.763: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods 
dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:35.765: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:35.767: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:35.781: INFO: Unable to read jessie_udp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:35.783: INFO: Unable to read jessie_tcp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:35.785: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:35.787: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:35.831: INFO: Lookups using dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f failed for: [wheezy_udp@dns-test-service.dns-1137.svc.cluster.local wheezy_tcp@dns-test-service.dns-1137.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local jessie_udp@dns-test-service.dns-1137.svc.cluster.local jessie_tcp@dns-test-service.dns-1137.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local] Mar 13 13:00:40.761: INFO: Unable to read wheezy_udp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:40.764: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:40.767: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:40.770: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:40.790: INFO: Unable to read jessie_udp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the 
server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:40.793: INFO: Unable to read jessie_tcp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:40.796: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:40.799: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:40.814: INFO: Lookups using dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f failed for: [wheezy_udp@dns-test-service.dns-1137.svc.cluster.local wheezy_tcp@dns-test-service.dns-1137.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local jessie_udp@dns-test-service.dns-1137.svc.cluster.local jessie_tcp@dns-test-service.dns-1137.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local] Mar 13 13:00:45.760: INFO: Unable to read wheezy_udp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:45.762: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:45.764: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:45.765: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:45.782: INFO: Unable to read jessie_udp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:45.784: INFO: Unable to read jessie_tcp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:45.786: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:45.788: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod 
dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:45.816: INFO: Lookups using dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f failed for: [wheezy_udp@dns-test-service.dns-1137.svc.cluster.local wheezy_tcp@dns-test-service.dns-1137.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local jessie_udp@dns-test-service.dns-1137.svc.cluster.local jessie_tcp@dns-test-service.dns-1137.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local] Mar 13 13:00:50.762: INFO: Unable to read wheezy_udp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:50.765: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:50.768: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:50.771: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:50.792: INFO: Unable to read jessie_udp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:50.795: INFO: Unable to read jessie_tcp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:50.799: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:50.802: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:50.818: INFO: Lookups using dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f failed for: [wheezy_udp@dns-test-service.dns-1137.svc.cluster.local wheezy_tcp@dns-test-service.dns-1137.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local jessie_udp@dns-test-service.dns-1137.svc.cluster.local jessie_tcp@dns-test-service.dns-1137.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local] Mar 13 
13:00:55.761: INFO: Unable to read wheezy_udp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:55.763: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:55.765: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:55.768: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:55.784: INFO: Unable to read jessie_udp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:55.786: INFO: Unable to read jessie_tcp@dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:55.788: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:55.790: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local from pod dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f: the server could not find the requested resource (get pods dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f) Mar 13 13:00:55.803: INFO: Lookups using dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f failed for: [wheezy_udp@dns-test-service.dns-1137.svc.cluster.local wheezy_tcp@dns-test-service.dns-1137.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local jessie_udp@dns-test-service.dns-1137.svc.cluster.local jessie_tcp@dns-test-service.dns-1137.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1137.svc.cluster.local] Mar 13 13:01:00.827: INFO: DNS probes using dns-1137/dns-test-7774a6f2-c506-4ec8-8c62-b477ad68489f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:01:01.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1137" for this suite. 
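The repeated "Unable to read ... / Lookups ... failed for:" blocks above are the prober polling: each iteration re-reads the result files the wheezy and jessie pods write once a lookup answers, so failures are expected until cluster DNS serves the service records, and the run converges at 13:01:00 with "DNS probes ... succeeded". The same A and SRV lookups can be issued by hand from any pod whose image ships dig (pod name invented here):

# assumes a running pod "dns-probe" with dig installed
kubectl exec dns-probe -- dig +notcp +search +short dns-test-service.dns-1137.svc.cluster.local A
kubectl exec dns-probe -- dig +notcp +search +short _http._tcp.dns-test-service.dns-1137.svc.cluster.local SRV
kubectl exec dns-probe -- dig +tcp +search +short dns-test-service.dns-1137.svc.cluster.local A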
Mar 13 13:01:07.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:01:07.091: INFO: namespace dns-1137 deletion completed in 6.072045121s • [SLOW TEST:40.540 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:01:07.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 13:01:07.181: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1cfc86f-3c75-48cf-98d7-d4eb2401f99c" in namespace "downward-api-2626" to be "success or failure" Mar 13 13:01:07.186: INFO: Pod "downwardapi-volume-a1cfc86f-3c75-48cf-98d7-d4eb2401f99c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.984394ms Mar 13 13:01:09.189: INFO: Pod "downwardapi-volume-a1cfc86f-3c75-48cf-98d7-d4eb2401f99c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008342306s Mar 13 13:01:11.194: INFO: Pod "downwardapi-volume-a1cfc86f-3c75-48cf-98d7-d4eb2401f99c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012700977s STEP: Saw pod success Mar 13 13:01:11.194: INFO: Pod "downwardapi-volume-a1cfc86f-3c75-48cf-98d7-d4eb2401f99c" satisfied condition "success or failure" Mar 13 13:01:11.197: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a1cfc86f-3c75-48cf-98d7-d4eb2401f99c container client-container: STEP: delete the pod Mar 13 13:01:11.234: INFO: Waiting for pod downwardapi-volume-a1cfc86f-3c75-48cf-98d7-d4eb2401f99c to disappear Mar 13 13:01:11.243: INFO: Pod downwardapi-volume-a1cfc86f-3c75-48cf-98d7-d4eb2401f99c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:01:11.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2626" for this suite. 
Mar 13 13:01:17.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:01:17.317: INFO: namespace downward-api-2626 deletion completed in 6.07067842s • [SLOW TEST:10.226 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:01:17.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 13 13:01:21.472: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 13 13:01:21.484: INFO: Pod pod-with-prestop-http-hook still exists Mar 13 13:01:23.484: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 13 13:01:23.487: INFO: Pod pod-with-prestop-http-hook still exists Mar 13 13:01:25.484: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 13 13:01:25.487: INFO: Pod pod-with-prestop-http-hook still exists Mar 13 13:01:27.484: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 13 13:01:27.486: INFO: Pod pod-with-prestop-http-hook still exists Mar 13 13:01:29.484: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 13 13:01:29.487: INFO: Pod pod-with-prestop-http-hook still exists Mar 13 13:01:31.484: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 13 13:01:31.487: INFO: Pod pod-with-prestop-http-hook still exists Mar 13 13:01:33.484: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 13 13:01:33.486: INFO: Pod pod-with-prestop-http-hook still exists Mar 13 13:01:35.484: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 13 13:01:35.487: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:01:35.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1318" for this suite. 
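The lifecycle spec registers an HTTP preStop hook, deletes the pod, and then polls (the two-second loop above) until the kubelet has fired the hook and the pod is gone. The shape of such a hook, with an invented handler path; the e2e points the request at a separate handler pod, whereas by default it goes to the pod's own IP:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-hook-demo
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        httpGet:
          path: /shutdown     # hypothetical handler endpoint
          port: 8080
EOF
kubectl delete pod prestop-hook-demo   # kubelet issues the GET before terminating the container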
Mar 13 13:01:57.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:01:57.620: INFO: namespace container-lifecycle-hook-1318 deletion completed in 22.123160928s • [SLOW TEST:40.303 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:01:57.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0313 13:02:28.212241 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 13 13:02:28.212: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:02:28.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6184" for this suite. 
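PropagationPolicy=Orphan tells the garbage collector to delete the owner but strip, rather than follow, the dependents' ownerReferences, which is why the spec waits 30 seconds and then confirms the ReplicaSet survived. kubectl exposes the same deleteOptions through its cascade flag (spelled --cascade=false before kubectl 1.20 and --cascade=orphan since):

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine
kubectl delete deployment gc-demo --cascade=orphan
kubectl get rs -l app=gc-demo    # the ReplicaSet is still there, now ownerless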
Mar 13 13:02:34.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:02:34.314: INFO: namespace gc-6184 deletion completed in 6.100396449s • [SLOW TEST:36.694 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:02:34.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-c7dc1738-c526-4cf9-9efd-d98fe31bf8d8 STEP: Creating a pod to test consume configMaps Mar 13 13:02:34.375: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc16130f-e0b1-492a-81cb-cb9777962791" in namespace "configmap-6507" to be "success or failure" Mar 13 13:02:34.391: INFO: Pod "pod-configmaps-dc16130f-e0b1-492a-81cb-cb9777962791": Phase="Pending", Reason="", readiness=false. Elapsed: 16.166702ms Mar 13 13:02:36.393: INFO: Pod "pod-configmaps-dc16130f-e0b1-492a-81cb-cb9777962791": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018760082s STEP: Saw pod success Mar 13 13:02:36.393: INFO: Pod "pod-configmaps-dc16130f-e0b1-492a-81cb-cb9777962791" satisfied condition "success or failure" Mar 13 13:02:36.395: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-dc16130f-e0b1-492a-81cb-cb9777962791 container configmap-volume-test: STEP: delete the pod Mar 13 13:02:36.463: INFO: Waiting for pod pod-configmaps-dc16130f-e0b1-492a-81cb-cb9777962791 to disappear Mar 13 13:02:36.468: INFO: Pod pod-configmaps-dc16130f-e0b1-492a-81cb-cb9777962791 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:02:36.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6507" for this suite. 
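The pod this spec builds mounts the ConfigMap as a volume and runs as a non-root UID; roughly (UID, image, command, and data key are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-example        # illustrative name
  spec:
    securityContext:
      runAsUser: 1000                   # non-root UID; illustrative value
    restartPolicy: Never
    containers:
    - name: configmap-volume-test       # container name as logged above
      image: busybox                    # illustrative image
      command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]   # illustrative read
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-c7dc1738-c526-4cf9-9efd-d98fe31bf8d8  # from the log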
Mar 13 13:02:42.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:02:42.585: INFO: namespace configmap-6507 deletion completed in 6.114618174s • [SLOW TEST:8.271 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:02:42.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-2212 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2212 to expose endpoints map[] Mar 13 13:02:42.673: INFO: Get endpoints failed (8.097795ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 13 13:02:43.676: INFO: successfully validated that service multi-endpoint-test in namespace services-2212 exposes endpoints map[] (1.011321145s elapsed) STEP: Creating pod pod1 in namespace services-2212 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2212 to expose endpoints map[pod1:[100]] Mar 13 13:02:45.708: INFO: successfully validated that service multi-endpoint-test in namespace services-2212 exposes endpoints map[pod1:[100]] (2.022918047s elapsed) STEP: Creating pod pod2 in namespace services-2212 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2212 to expose endpoints map[pod1:[100] pod2:[101]] Mar 13 13:02:47.751: INFO: successfully validated that service multi-endpoint-test in namespace services-2212 exposes endpoints map[pod1:[100] pod2:[101]] (2.037571584s elapsed) STEP: Deleting pod pod1 in namespace services-2212 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2212 to expose endpoints map[pod2:[101]] Mar 13 13:02:48.830: INFO: successfully validated that service multi-endpoint-test in namespace services-2212 exposes endpoints map[pod2:[101]] (1.075566841s elapsed) STEP: Deleting pod pod2 in namespace services-2212 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2212 to expose endpoints map[] Mar 13 13:02:48.841: INFO: successfully validated that service multi-endpoint-test in namespace services-2212 exposes endpoints map[] (5.085665ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:02:48.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2212" for this suite. 
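The endpoint maps logged above (pod1:[100], pod2:[101]) come from a two-port service whose named ports target different container ports; approximately (selector, port names, and service ports are illustrative, while the target ports 100 and 101 match the endpoints shown):

  apiVersion: v1
  kind: Service
  metadata:
    name: multi-endpoint-test           # service name from the log
  spec:
    selector:
      name: multi-endpoint-test         # illustrative selector label
    ports:
    - name: portname1
      port: 80
      targetPort: 100                   # pod1's endpoint port above
    - name: portname2
      port: 81
      targetPort: 101                   # pod2's endpoint port above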
Mar 13 13:03:10.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:03:10.973: INFO: namespace services-2212 deletion completed in 22.08871989s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:28.387 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:03:10.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:03:16.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7009" for this suite. 
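The adoption sequence above boils down to creating a bare pod with only a label, then an RC whose selector matches it (images and the RC name are illustrative; the 'name' label is the one the step text mentions):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-adoption                  # pod name from the step text
    labels:
      name: pod-adoption
  spec:
    containers:
    - name: pod-adoption
      image: nginx                      # illustrative
  ---
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pod-adoption-rc               # illustrative name
  spec:
    replicas: 1
    selector:
      name: pod-adoption                # matches the orphan pod's label
    template:
      metadata:
        labels:
          name: pod-adoption
      spec:
        containers:
        - name: pod-adoption
          image: nginx                  # illustrative

Because the selector matches and the pod carries no controller ownerReference, the RC adopts it instead of creating a replacement.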
Mar 13 13:03:38.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:03:38.171: INFO: namespace replication-controller-7009 deletion completed in 22.079453345s • [SLOW TEST:27.198 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:03:38.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 13:03:38.209: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 13 13:03:40.253: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:03:41.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-514" for this suite. 
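The quota-versus-RC setup this spec describes is roughly the following (image illustrative; the pod limit of two and the initial replica count of three follow the step text):

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: condition-test                # name from the log
  spec:
    hard:
      pods: "2"                         # only two pods allowed in the namespace
  ---
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: condition-test                # name from the log
  spec:
    replicas: 3                         # asks for more than the quota allows
    selector:
      name: condition-test
    template:
      metadata:
        labels:
          name: condition-test
      spec:
        containers:
        - name: condition-test
          image: nginx                  # illustrative

Until it is scaled down to fit the quota, the controller surfaces the failure as a ReplicaFailure condition in the RC's status, which is what the two "Checking rc" steps assert.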
Mar 13 13:03:47.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:03:47.343: INFO: namespace replication-controller-514 deletion completed in 6.060338025s • [SLOW TEST:9.172 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:03:47.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 13:03:47.448: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.931359ms) Mar 13 13:03:47.451: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.133329ms) Mar 13 13:03:47.453: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.104455ms) Mar 13 13:03:47.455: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.240405ms) Mar 13 13:03:47.457: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.323561ms) Mar 13 13:03:47.460: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.154224ms) Mar 13 13:03:47.461: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.951232ms) Mar 13 13:03:47.463: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.933637ms) Mar 13 13:03:47.465: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.93904ms) Mar 13 13:03:47.467: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.865022ms) Mar 13 13:03:47.469: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.836465ms) Mar 13 13:03:47.471: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.012758ms) Mar 13 13:03:47.473: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.887971ms) Mar 13 13:03:47.475: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.901059ms) Mar 13 13:03:47.477: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.989131ms) Mar 13 13:03:47.479: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.918684ms) Mar 13 13:03:47.481: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.845762ms) Mar 13 13:03:47.483: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.793374ms) Mar 13 13:03:47.485: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.893639ms) Mar 13 13:03:47.486: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.810239ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:03:47.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2293" for this suite. Mar 13 13:03:53.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:03:53.550: INFO: namespace proxy-2293 deletion completed in 6.061778999s • [SLOW TEST:6.208 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:03:53.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 13 13:03:53.589: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 13 13:03:53.596: INFO: Waiting for terminating namespaces to be deleted... Mar 13 13:03:53.598: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 13 13:03:53.601: INFO: kindnet-9jdkr from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded) Mar 13 13:03:53.601: INFO: Container kindnet-cni ready: true, restart count 0 Mar 13 13:03:53.601: INFO: kube-proxy-nf96r from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded) Mar 13 13:03:53.601: INFO: Container kube-proxy ready: true, restart count 0 Mar 13 13:03:53.601: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 13 13:03:53.604: INFO: kindnet-d7zdc from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded) Mar 13 13:03:53.604: INFO: Container kindnet-cni ready: true, restart count 0 Mar 13 13:03:53.604: INFO: kube-proxy-clpmt from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded) Mar 13 13:03:53.604: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fbdec5759c65ae], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:03:54.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3642" for this suite. Mar 13 13:04:00.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:04:00.716: INFO: namespace sched-pred-3642 deletion completed in 6.096412687s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.166 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:04:00.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-aafac090-5865-428c-b2f2-c6f27c68ae5d STEP: Creating a pod to test consume configMaps Mar 13 13:04:00.769: INFO: Waiting up to 5m0s for pod "pod-configmaps-e956864b-a91a-470c-9a3b-90ef893da0d3" in namespace "configmap-2946" to be "success or failure" Mar 13 13:04:00.785: INFO: Pod "pod-configmaps-e956864b-a91a-470c-9a3b-90ef893da0d3": Phase="Pending", Reason="", readiness=false. Elapsed: 15.602126ms Mar 13 13:04:02.788: INFO: Pod "pod-configmaps-e956864b-a91a-470c-9a3b-90ef893da0d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018822176s STEP: Saw pod success Mar 13 13:04:02.788: INFO: Pod "pod-configmaps-e956864b-a91a-470c-9a3b-90ef893da0d3" satisfied condition "success or failure" Mar 13 13:04:02.790: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e956864b-a91a-470c-9a3b-90ef893da0d3 container configmap-volume-test: STEP: delete the pod Mar 13 13:04:02.808: INFO: Waiting for pod pod-configmaps-e956864b-a91a-470c-9a3b-90ef893da0d3 to disappear Mar 13 13:04:02.823: INFO: Pod pod-configmaps-e956864b-a91a-470c-9a3b-90ef893da0d3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:04:02.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2946" for this suite. 
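"Mappings and Item mode" in the spec name means the ConfigMap volume uses per-item key-to-path remapping plus a per-file mode; the relevant volume stanza looks roughly like this (key, path, and mode value are illustrative):

  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-aafac090-5865-428c-b2f2-c6f27c68ae5d  # from the log
      items:
      - key: data-1                     # illustrative ConfigMap key
        path: path/to/data-2            # remapped file path inside the mount
        mode: 0400                      # per-item file mode the spec verifies

The consuming pod then reads the file at <mountPath>/path/to/data-2 and checks its permission bits.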
Mar 13 13:04:08.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:04:08.953: INFO: namespace configmap-2946 deletion completed in 6.123223907s • [SLOW TEST:8.237 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:04:08.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 13 13:04:09.016: INFO: Waiting up to 5m0s for pod "pod-d750ff41-745d-42e7-af55-8812ef24decc" in namespace "emptydir-2817" to be "success or failure" Mar 13 13:04:09.032: INFO: Pod "pod-d750ff41-745d-42e7-af55-8812ef24decc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.740386ms Mar 13 13:04:11.035: INFO: Pod "pod-d750ff41-745d-42e7-af55-8812ef24decc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018828379s STEP: Saw pod success Mar 13 13:04:11.035: INFO: Pod "pod-d750ff41-745d-42e7-af55-8812ef24decc" satisfied condition "success or failure" Mar 13 13:04:11.037: INFO: Trying to get logs from node iruya-worker pod pod-d750ff41-745d-42e7-af55-8812ef24decc container test-container: STEP: delete the pod Mar 13 13:04:11.053: INFO: Waiting for pod pod-d750ff41-745d-42e7-af55-8812ef24decc to disappear Mar 13 13:04:11.071: INFO: Pod pod-d750ff41-745d-42e7-af55-8812ef24decc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:04:11.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2817" for this suite. 
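The "(non-root,0644,default)" triple in the spec name encodes: run as a non-root UID, expect 0644 file permissions, and use the default (node-disk) emptyDir medium. A sketch of such a pod (UID, image, and command are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-test            # illustrative name
  spec:
    securityContext:
      runAsUser: 1000                   # non-root; illustrative UID
    restartPolicy: Never
    containers:
    - name: test-container              # container name as logged above
      image: busybox                    # illustrative
      command: ["sh", "-c", "ls -l /test-volume"]   # illustrative permission check
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                      # default medium (node disk, not tmpfs)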
Mar 13 13:04:17.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:04:17.139: INFO: namespace emptydir-2817 deletion completed in 6.064650524s • [SLOW TEST:8.185 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:04:17.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 13 13:04:25.236: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5208 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:04:25.236: INFO: >>> kubeConfig: /root/.kube/config I0313 13:04:25.264867 6 log.go:172] (0xc00109f080) (0xc002f79f40) Create stream I0313 13:04:25.264891 6 log.go:172] (0xc00109f080) (0xc002f79f40) Stream added, broadcasting: 1 I0313 13:04:25.266461 6 log.go:172] (0xc00109f080) Reply frame received for 1 I0313 13:04:25.266498 6 log.go:172] (0xc00109f080) (0xc002ed8000) Create stream I0313 13:04:25.266510 6 log.go:172] (0xc00109f080) (0xc002ed8000) Stream added, broadcasting: 3 I0313 13:04:25.267258 6 log.go:172] (0xc00109f080) Reply frame received for 3 I0313 13:04:25.267278 6 log.go:172] (0xc00109f080) (0xc002ed80a0) Create stream I0313 13:04:25.267286 6 log.go:172] (0xc00109f080) (0xc002ed80a0) Stream added, broadcasting: 5 I0313 13:04:25.267945 6 log.go:172] (0xc00109f080) Reply frame received for 5 I0313 13:04:25.324175 6 log.go:172] (0xc00109f080) Data frame received for 5 I0313 13:04:25.324196 6 log.go:172] (0xc002ed80a0) (5) Data frame handling I0313 13:04:25.324214 6 log.go:172] (0xc00109f080) Data frame received for 3 I0313 13:04:25.324229 6 log.go:172] (0xc002ed8000) (3) Data frame handling I0313 13:04:25.324240 6 log.go:172] (0xc002ed8000) (3) Data frame sent I0313 13:04:25.324246 6 log.go:172] (0xc00109f080) Data frame received for 3 I0313 13:04:25.324250 6 log.go:172] (0xc002ed8000) (3) Data frame handling I0313 13:04:25.325469 6 log.go:172] (0xc00109f080) Data frame received for 1 I0313 13:04:25.325490 6 log.go:172] (0xc002f79f40) (1) Data frame handling I0313 13:04:25.325499 6 log.go:172] (0xc002f79f40) (1) Data frame sent I0313 13:04:25.325513 6 log.go:172] (0xc00109f080) (0xc002f79f40) Stream removed, broadcasting: 1 I0313 
13:04:25.325533 6 log.go:172] (0xc00109f080) Go away received I0313 13:04:25.325589 6 log.go:172] (0xc00109f080) (0xc002f79f40) Stream removed, broadcasting: 1 I0313 13:04:25.325606 6 log.go:172] (0xc00109f080) (0xc002ed8000) Stream removed, broadcasting: 3 I0313 13:04:25.325615 6 log.go:172] (0xc00109f080) (0xc002ed80a0) Stream removed, broadcasting: 5 Mar 13 13:04:25.325: INFO: Exec stderr: "" Mar 13 13:04:25.325: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5208 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:04:25.325: INFO: >>> kubeConfig: /root/.kube/config I0313 13:04:25.350594 6 log.go:172] (0xc002da6840) (0xc002173ea0) Create stream I0313 13:04:25.350624 6 log.go:172] (0xc002da6840) (0xc002173ea0) Stream added, broadcasting: 1 I0313 13:04:25.352736 6 log.go:172] (0xc002da6840) Reply frame received for 1 I0313 13:04:25.352769 6 log.go:172] (0xc002da6840) (0xc00315fe00) Create stream I0313 13:04:25.352782 6 log.go:172] (0xc002da6840) (0xc00315fe00) Stream added, broadcasting: 3 I0313 13:04:25.353668 6 log.go:172] (0xc002da6840) Reply frame received for 3 I0313 13:04:25.353704 6 log.go:172] (0xc002da6840) (0xc0030ae000) Create stream I0313 13:04:25.353716 6 log.go:172] (0xc002da6840) (0xc0030ae000) Stream added, broadcasting: 5 I0313 13:04:25.354450 6 log.go:172] (0xc002da6840) Reply frame received for 5 I0313 13:04:25.404329 6 log.go:172] (0xc002da6840) Data frame received for 5 I0313 13:04:25.404355 6 log.go:172] (0xc0030ae000) (5) Data frame handling I0313 13:04:25.404373 6 log.go:172] (0xc002da6840) Data frame received for 3 I0313 13:04:25.404383 6 log.go:172] (0xc00315fe00) (3) Data frame handling I0313 13:04:25.404393 6 log.go:172] (0xc00315fe00) (3) Data frame sent I0313 13:04:25.404403 6 log.go:172] (0xc002da6840) Data frame received for 3 I0313 13:04:25.404410 6 log.go:172] (0xc00315fe00) (3) Data frame handling I0313 13:04:25.405810 6 log.go:172] (0xc002da6840) Data frame received for 1 I0313 13:04:25.405868 6 log.go:172] (0xc002173ea0) (1) Data frame handling I0313 13:04:25.405894 6 log.go:172] (0xc002173ea0) (1) Data frame sent I0313 13:04:25.405910 6 log.go:172] (0xc002da6840) (0xc002173ea0) Stream removed, broadcasting: 1 I0313 13:04:25.405925 6 log.go:172] (0xc002da6840) Go away received I0313 13:04:25.406282 6 log.go:172] (0xc002da6840) (0xc002173ea0) Stream removed, broadcasting: 1 I0313 13:04:25.406298 6 log.go:172] (0xc002da6840) (0xc00315fe00) Stream removed, broadcasting: 3 I0313 13:04:25.406305 6 log.go:172] (0xc002da6840) (0xc0030ae000) Stream removed, broadcasting: 5 Mar 13 13:04:25.406: INFO: Exec stderr: "" Mar 13 13:04:25.406: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5208 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:04:25.406: INFO: >>> kubeConfig: /root/.kube/config I0313 13:04:25.432194 6 log.go:172] (0xc00109fef0) (0xc0030ae320) Create stream I0313 13:04:25.432218 6 log.go:172] (0xc00109fef0) (0xc0030ae320) Stream added, broadcasting: 1 I0313 13:04:25.441618 6 log.go:172] (0xc00109fef0) Reply frame received for 1 I0313 13:04:25.441661 6 log.go:172] (0xc00109fef0) (0xc002173f40) Create stream I0313 13:04:25.441671 6 log.go:172] (0xc00109fef0) (0xc002173f40) Stream added, broadcasting: 3 I0313 13:04:25.442639 6 log.go:172] (0xc00109fef0) Reply frame received for 3 I0313 13:04:25.442670 6 log.go:172] (0xc00109fef0) 
(0xc00315fea0) Create stream I0313 13:04:25.442683 6 log.go:172] (0xc00109fef0) (0xc00315fea0) Stream added, broadcasting: 5 I0313 13:04:25.443377 6 log.go:172] (0xc00109fef0) Reply frame received for 5 I0313 13:04:25.515987 6 log.go:172] (0xc00109fef0) Data frame received for 3 I0313 13:04:25.516018 6 log.go:172] (0xc002173f40) (3) Data frame handling I0313 13:04:25.516029 6 log.go:172] (0xc002173f40) (3) Data frame sent I0313 13:04:25.516039 6 log.go:172] (0xc00109fef0) Data frame received for 3 I0313 13:04:25.516047 6 log.go:172] (0xc002173f40) (3) Data frame handling I0313 13:04:25.516071 6 log.go:172] (0xc00109fef0) Data frame received for 5 I0313 13:04:25.516080 6 log.go:172] (0xc00315fea0) (5) Data frame handling I0313 13:04:25.517035 6 log.go:172] (0xc00109fef0) Data frame received for 1 I0313 13:04:25.517054 6 log.go:172] (0xc0030ae320) (1) Data frame handling I0313 13:04:25.517068 6 log.go:172] (0xc0030ae320) (1) Data frame sent I0313 13:04:25.517080 6 log.go:172] (0xc00109fef0) (0xc0030ae320) Stream removed, broadcasting: 1 I0313 13:04:25.517097 6 log.go:172] (0xc00109fef0) Go away received I0313 13:04:25.517243 6 log.go:172] (0xc00109fef0) (0xc0030ae320) Stream removed, broadcasting: 1 I0313 13:04:25.517271 6 log.go:172] (0xc00109fef0) (0xc002173f40) Stream removed, broadcasting: 3 I0313 13:04:25.517279 6 log.go:172] (0xc00109fef0) (0xc00315fea0) Stream removed, broadcasting: 5 Mar 13 13:04:25.517: INFO: Exec stderr: "" Mar 13 13:04:25.517: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5208 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:04:25.517: INFO: >>> kubeConfig: /root/.kube/config I0313 13:04:25.540893 6 log.go:172] (0xc002760370) (0xc00275e1e0) Create stream I0313 13:04:25.540917 6 log.go:172] (0xc002760370) (0xc00275e1e0) Stream added, broadcasting: 1 I0313 13:04:25.542453 6 log.go:172] (0xc002760370) Reply frame received for 1 I0313 13:04:25.542495 6 log.go:172] (0xc002760370) (0xc0016abb80) Create stream I0313 13:04:25.542504 6 log.go:172] (0xc002760370) (0xc0016abb80) Stream added, broadcasting: 3 I0313 13:04:25.543271 6 log.go:172] (0xc002760370) Reply frame received for 3 I0313 13:04:25.543296 6 log.go:172] (0xc002760370) (0xc00275e280) Create stream I0313 13:04:25.543305 6 log.go:172] (0xc002760370) (0xc00275e280) Stream added, broadcasting: 5 I0313 13:04:25.543925 6 log.go:172] (0xc002760370) Reply frame received for 5 I0313 13:04:25.608111 6 log.go:172] (0xc002760370) Data frame received for 3 I0313 13:04:25.608130 6 log.go:172] (0xc0016abb80) (3) Data frame handling I0313 13:04:25.608144 6 log.go:172] (0xc0016abb80) (3) Data frame sent I0313 13:04:25.608289 6 log.go:172] (0xc002760370) Data frame received for 3 I0313 13:04:25.608298 6 log.go:172] (0xc0016abb80) (3) Data frame handling I0313 13:04:25.608315 6 log.go:172] (0xc002760370) Data frame received for 5 I0313 13:04:25.608322 6 log.go:172] (0xc00275e280) (5) Data frame handling I0313 13:04:25.609541 6 log.go:172] (0xc002760370) Data frame received for 1 I0313 13:04:25.609564 6 log.go:172] (0xc00275e1e0) (1) Data frame handling I0313 13:04:25.609572 6 log.go:172] (0xc00275e1e0) (1) Data frame sent I0313 13:04:25.609580 6 log.go:172] (0xc002760370) (0xc00275e1e0) Stream removed, broadcasting: 1 I0313 13:04:25.609594 6 log.go:172] (0xc002760370) Go away received I0313 13:04:25.609725 6 log.go:172] (0xc002760370) (0xc00275e1e0) Stream removed, broadcasting: 1 I0313 13:04:25.609742 6 
log.go:172] (0xc002760370) (0xc0016abb80) Stream removed, broadcasting: 3 I0313 13:04:25.609753 6 log.go:172] (0xc002760370) (0xc00275e280) Stream removed, broadcasting: 5 Mar 13 13:04:25.609: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 13 13:04:25.609: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5208 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:04:25.609: INFO: >>> kubeConfig: /root/.kube/config I0313 13:04:25.633410 6 log.go:172] (0xc002761130) (0xc00275e5a0) Create stream I0313 13:04:25.633436 6 log.go:172] (0xc002761130) (0xc00275e5a0) Stream added, broadcasting: 1 I0313 13:04:25.639033 6 log.go:172] (0xc002761130) Reply frame received for 1 I0313 13:04:25.639065 6 log.go:172] (0xc002761130) (0xc00315e000) Create stream I0313 13:04:25.639075 6 log.go:172] (0xc002761130) (0xc00315e000) Stream added, broadcasting: 3 I0313 13:04:25.639856 6 log.go:172] (0xc002761130) Reply frame received for 3 I0313 13:04:25.639895 6 log.go:172] (0xc002761130) (0xc000fd6000) Create stream I0313 13:04:25.639908 6 log.go:172] (0xc002761130) (0xc000fd6000) Stream added, broadcasting: 5 I0313 13:04:25.640636 6 log.go:172] (0xc002761130) Reply frame received for 5 I0313 13:04:25.704102 6 log.go:172] (0xc002761130) Data frame received for 3 I0313 13:04:25.704135 6 log.go:172] (0xc00315e000) (3) Data frame handling I0313 13:04:25.704144 6 log.go:172] (0xc00315e000) (3) Data frame sent I0313 13:04:25.704153 6 log.go:172] (0xc002761130) Data frame received for 3 I0313 13:04:25.704159 6 log.go:172] (0xc00315e000) (3) Data frame handling I0313 13:04:25.704207 6 log.go:172] (0xc002761130) Data frame received for 5 I0313 13:04:25.704246 6 log.go:172] (0xc000fd6000) (5) Data frame handling I0313 13:04:25.705031 6 log.go:172] (0xc002761130) Data frame received for 1 I0313 13:04:25.705050 6 log.go:172] (0xc00275e5a0) (1) Data frame handling I0313 13:04:25.705064 6 log.go:172] (0xc00275e5a0) (1) Data frame sent I0313 13:04:25.705076 6 log.go:172] (0xc002761130) (0xc00275e5a0) Stream removed, broadcasting: 1 I0313 13:04:25.705091 6 log.go:172] (0xc002761130) Go away received I0313 13:04:25.705196 6 log.go:172] (0xc002761130) (0xc00275e5a0) Stream removed, broadcasting: 1 I0313 13:04:25.705208 6 log.go:172] (0xc002761130) (0xc00315e000) Stream removed, broadcasting: 3 I0313 13:04:25.705214 6 log.go:172] (0xc002761130) (0xc000fd6000) Stream removed, broadcasting: 5 Mar 13 13:04:25.705: INFO: Exec stderr: "" Mar 13 13:04:25.705: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5208 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:04:25.705: INFO: >>> kubeConfig: /root/.kube/config I0313 13:04:25.727999 6 log.go:172] (0xc002a764d0) (0xc000fd6500) Create stream I0313 13:04:25.728018 6 log.go:172] (0xc002a764d0) (0xc000fd6500) Stream added, broadcasting: 1 I0313 13:04:25.729629 6 log.go:172] (0xc002a764d0) Reply frame received for 1 I0313 13:04:25.729660 6 log.go:172] (0xc002a764d0) (0xc0004f0140) Create stream I0313 13:04:25.729672 6 log.go:172] (0xc002a764d0) (0xc0004f0140) Stream added, broadcasting: 3 I0313 13:04:25.730491 6 log.go:172] (0xc002a764d0) Reply frame received for 3 I0313 13:04:25.730512 6 log.go:172] (0xc002a764d0) (0xc0018720a0) Create stream I0313 13:04:25.730522 6 log.go:172] (0xc002a764d0) 
(0xc0018720a0) Stream added, broadcasting: 5 I0313 13:04:25.731709 6 log.go:172] (0xc002a764d0) Reply frame received for 5 I0313 13:04:25.791904 6 log.go:172] (0xc002a764d0) Data frame received for 5 I0313 13:04:25.791932 6 log.go:172] (0xc0018720a0) (5) Data frame handling I0313 13:04:25.791949 6 log.go:172] (0xc002a764d0) Data frame received for 3 I0313 13:04:25.791961 6 log.go:172] (0xc0004f0140) (3) Data frame handling I0313 13:04:25.791975 6 log.go:172] (0xc0004f0140) (3) Data frame sent I0313 13:04:25.791980 6 log.go:172] (0xc002a764d0) Data frame received for 3 I0313 13:04:25.791987 6 log.go:172] (0xc0004f0140) (3) Data frame handling I0313 13:04:25.792895 6 log.go:172] (0xc002a764d0) Data frame received for 1 I0313 13:04:25.792916 6 log.go:172] (0xc000fd6500) (1) Data frame handling I0313 13:04:25.792933 6 log.go:172] (0xc000fd6500) (1) Data frame sent I0313 13:04:25.792945 6 log.go:172] (0xc002a764d0) (0xc000fd6500) Stream removed, broadcasting: 1 I0313 13:04:25.792958 6 log.go:172] (0xc002a764d0) Go away received I0313 13:04:25.793079 6 log.go:172] (0xc002a764d0) (0xc000fd6500) Stream removed, broadcasting: 1 I0313 13:04:25.793096 6 log.go:172] (0xc002a764d0) (0xc0004f0140) Stream removed, broadcasting: 3 I0313 13:04:25.793111 6 log.go:172] (0xc002a764d0) (0xc0018720a0) Stream removed, broadcasting: 5 Mar 13 13:04:25.793: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 13 13:04:25.793: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5208 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:04:25.793: INFO: >>> kubeConfig: /root/.kube/config I0313 13:04:25.816495 6 log.go:172] (0xc00049dd90) (0xc001872780) Create stream I0313 13:04:25.816517 6 log.go:172] (0xc00049dd90) (0xc001872780) Stream added, broadcasting: 1 I0313 13:04:25.818014 6 log.go:172] (0xc00049dd90) Reply frame received for 1 I0313 13:04:25.818041 6 log.go:172] (0xc00049dd90) (0xc0020e8000) Create stream I0313 13:04:25.818052 6 log.go:172] (0xc00049dd90) (0xc0020e8000) Stream added, broadcasting: 3 I0313 13:04:25.818718 6 log.go:172] (0xc00049dd90) Reply frame received for 3 I0313 13:04:25.818741 6 log.go:172] (0xc00049dd90) (0xc0004f01e0) Create stream I0313 13:04:25.818750 6 log.go:172] (0xc00049dd90) (0xc0004f01e0) Stream added, broadcasting: 5 I0313 13:04:25.819436 6 log.go:172] (0xc00049dd90) Reply frame received for 5 I0313 13:04:25.873836 6 log.go:172] (0xc00049dd90) Data frame received for 5 I0313 13:04:25.873867 6 log.go:172] (0xc0004f01e0) (5) Data frame handling I0313 13:04:25.873923 6 log.go:172] (0xc00049dd90) Data frame received for 3 I0313 13:04:25.873965 6 log.go:172] (0xc0020e8000) (3) Data frame handling I0313 13:04:25.873990 6 log.go:172] (0xc0020e8000) (3) Data frame sent I0313 13:04:25.874005 6 log.go:172] (0xc00049dd90) Data frame received for 3 I0313 13:04:25.874013 6 log.go:172] (0xc0020e8000) (3) Data frame handling I0313 13:04:25.875627 6 log.go:172] (0xc00049dd90) Data frame received for 1 I0313 13:04:25.875651 6 log.go:172] (0xc001872780) (1) Data frame handling I0313 13:04:25.875662 6 log.go:172] (0xc001872780) (1) Data frame sent I0313 13:04:25.875673 6 log.go:172] (0xc00049dd90) (0xc001872780) Stream removed, broadcasting: 1 I0313 13:04:25.875784 6 log.go:172] (0xc00049dd90) (0xc001872780) Stream removed, broadcasting: 1 I0313 13:04:25.875799 6 log.go:172] (0xc00049dd90) (0xc0020e8000) Stream 
removed, broadcasting: 3 I0313 13:04:25.875808 6 log.go:172] (0xc00049dd90) (0xc0004f01e0) Stream removed, broadcasting: 5 Mar 13 13:04:25.875: INFO: Exec stderr: "" Mar 13 13:04:25.875: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5208 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:04:25.875: INFO: >>> kubeConfig: /root/.kube/config I0313 13:04:25.877628 6 log.go:172] (0xc00049dd90) Go away received I0313 13:04:25.897029 6 log.go:172] (0xc000b06790) (0xc0004f08c0) Create stream I0313 13:04:25.897048 6 log.go:172] (0xc000b06790) (0xc0004f08c0) Stream added, broadcasting: 1 I0313 13:04:25.898708 6 log.go:172] (0xc000b06790) Reply frame received for 1 I0313 13:04:25.898752 6 log.go:172] (0xc000b06790) (0xc000fd6640) Create stream I0313 13:04:25.898765 6 log.go:172] (0xc000b06790) (0xc000fd6640) Stream added, broadcasting: 3 I0313 13:04:25.899481 6 log.go:172] (0xc000b06790) Reply frame received for 3 I0313 13:04:25.899510 6 log.go:172] (0xc000b06790) (0xc000fd6960) Create stream I0313 13:04:25.899519 6 log.go:172] (0xc000b06790) (0xc000fd6960) Stream added, broadcasting: 5 I0313 13:04:25.900335 6 log.go:172] (0xc000b06790) Reply frame received for 5 I0313 13:04:25.959484 6 log.go:172] (0xc000b06790) Data frame received for 5 I0313 13:04:25.959521 6 log.go:172] (0xc000fd6960) (5) Data frame handling I0313 13:04:25.959538 6 log.go:172] (0xc000b06790) Data frame received for 3 I0313 13:04:25.959546 6 log.go:172] (0xc000fd6640) (3) Data frame handling I0313 13:04:25.959557 6 log.go:172] (0xc000fd6640) (3) Data frame sent I0313 13:04:25.959568 6 log.go:172] (0xc000b06790) Data frame received for 3 I0313 13:04:25.959573 6 log.go:172] (0xc000fd6640) (3) Data frame handling I0313 13:04:25.960448 6 log.go:172] (0xc000b06790) Data frame received for 1 I0313 13:04:25.960460 6 log.go:172] (0xc0004f08c0) (1) Data frame handling I0313 13:04:25.960468 6 log.go:172] (0xc0004f08c0) (1) Data frame sent I0313 13:04:25.960480 6 log.go:172] (0xc000b06790) (0xc0004f08c0) Stream removed, broadcasting: 1 I0313 13:04:25.960493 6 log.go:172] (0xc000b06790) Go away received I0313 13:04:25.960619 6 log.go:172] (0xc000b06790) (0xc0004f08c0) Stream removed, broadcasting: 1 I0313 13:04:25.960636 6 log.go:172] (0xc000b06790) (0xc000fd6640) Stream removed, broadcasting: 3 I0313 13:04:25.960655 6 log.go:172] (0xc000b06790) (0xc000fd6960) Stream removed, broadcasting: 5 Mar 13 13:04:25.960: INFO: Exec stderr: "" Mar 13 13:04:25.960: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5208 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:04:25.960: INFO: >>> kubeConfig: /root/.kube/config I0313 13:04:25.980157 6 log.go:172] (0xc001304160) (0xc001872dc0) Create stream I0313 13:04:25.980179 6 log.go:172] (0xc001304160) (0xc001872dc0) Stream added, broadcasting: 1 I0313 13:04:25.981698 6 log.go:172] (0xc001304160) Reply frame received for 1 I0313 13:04:25.981721 6 log.go:172] (0xc001304160) (0xc0020e80a0) Create stream I0313 13:04:25.981730 6 log.go:172] (0xc001304160) (0xc0020e80a0) Stream added, broadcasting: 3 I0313 13:04:25.982287 6 log.go:172] (0xc001304160) Reply frame received for 3 I0313 13:04:25.982304 6 log.go:172] (0xc001304160) (0xc001872e60) Create stream I0313 13:04:25.982311 6 log.go:172] (0xc001304160) (0xc001872e60) Stream added, broadcasting: 5 I0313 13:04:25.982969 6 
log.go:172] (0xc001304160) Reply frame received for 5 I0313 13:04:26.044028 6 log.go:172] (0xc001304160) Data frame received for 3 I0313 13:04:26.044049 6 log.go:172] (0xc0020e80a0) (3) Data frame handling I0313 13:04:26.044061 6 log.go:172] (0xc0020e80a0) (3) Data frame sent I0313 13:04:26.044067 6 log.go:172] (0xc001304160) Data frame received for 3 I0313 13:04:26.044074 6 log.go:172] (0xc0020e80a0) (3) Data frame handling I0313 13:04:26.044200 6 log.go:172] (0xc001304160) Data frame received for 5 I0313 13:04:26.044217 6 log.go:172] (0xc001872e60) (5) Data frame handling I0313 13:04:26.045304 6 log.go:172] (0xc001304160) Data frame received for 1 I0313 13:04:26.045318 6 log.go:172] (0xc001872dc0) (1) Data frame handling I0313 13:04:26.045331 6 log.go:172] (0xc001872dc0) (1) Data frame sent I0313 13:04:26.045341 6 log.go:172] (0xc001304160) (0xc001872dc0) Stream removed, broadcasting: 1 I0313 13:04:26.045351 6 log.go:172] (0xc001304160) Go away received I0313 13:04:26.045486 6 log.go:172] (0xc001304160) (0xc001872dc0) Stream removed, broadcasting: 1 I0313 13:04:26.045500 6 log.go:172] (0xc001304160) (0xc0020e80a0) Stream removed, broadcasting: 3 I0313 13:04:26.045506 6 log.go:172] (0xc001304160) (0xc001872e60) Stream removed, broadcasting: 5 Mar 13 13:04:26.045: INFO: Exec stderr: "" Mar 13 13:04:26.045: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5208 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:04:26.045: INFO: >>> kubeConfig: /root/.kube/config I0313 13:04:26.066256 6 log.go:172] (0xc002760dc0) (0xc0020e8460) Create stream I0313 13:04:26.066276 6 log.go:172] (0xc002760dc0) (0xc0020e8460) Stream added, broadcasting: 1 I0313 13:04:26.068488 6 log.go:172] (0xc002760dc0) Reply frame received for 1 I0313 13:04:26.068524 6 log.go:172] (0xc002760dc0) (0xc00315e0a0) Create stream I0313 13:04:26.068534 6 log.go:172] (0xc002760dc0) (0xc00315e0a0) Stream added, broadcasting: 3 I0313 13:04:26.069318 6 log.go:172] (0xc002760dc0) Reply frame received for 3 I0313 13:04:26.069340 6 log.go:172] (0xc002760dc0) (0xc00315e140) Create stream I0313 13:04:26.069348 6 log.go:172] (0xc002760dc0) (0xc00315e140) Stream added, broadcasting: 5 I0313 13:04:26.069944 6 log.go:172] (0xc002760dc0) Reply frame received for 5 I0313 13:04:26.144717 6 log.go:172] (0xc002760dc0) Data frame received for 5 I0313 13:04:26.144740 6 log.go:172] (0xc00315e140) (5) Data frame handling I0313 13:04:26.144766 6 log.go:172] (0xc002760dc0) Data frame received for 3 I0313 13:04:26.144774 6 log.go:172] (0xc00315e0a0) (3) Data frame handling I0313 13:04:26.144780 6 log.go:172] (0xc00315e0a0) (3) Data frame sent I0313 13:04:26.144787 6 log.go:172] (0xc002760dc0) Data frame received for 3 I0313 13:04:26.144791 6 log.go:172] (0xc00315e0a0) (3) Data frame handling I0313 13:04:26.145825 6 log.go:172] (0xc002760dc0) Data frame received for 1 I0313 13:04:26.145837 6 log.go:172] (0xc0020e8460) (1) Data frame handling I0313 13:04:26.145844 6 log.go:172] (0xc0020e8460) (1) Data frame sent I0313 13:04:26.145858 6 log.go:172] (0xc002760dc0) (0xc0020e8460) Stream removed, broadcasting: 1 I0313 13:04:26.145870 6 log.go:172] (0xc002760dc0) Go away received I0313 13:04:26.145995 6 log.go:172] (0xc002760dc0) (0xc0020e8460) Stream removed, broadcasting: 1 I0313 13:04:26.146013 6 log.go:172] (0xc002760dc0) (0xc00315e0a0) Stream removed, broadcasting: 3 I0313 13:04:26.146022 6 log.go:172] (0xc002760dc0) (0xc00315e140) 
Stream removed, broadcasting: 5 Mar 13 13:04:26.146: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:04:26.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5208" for this suite. Mar 13 13:05:16.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:05:16.222: INFO: namespace e2e-kubelet-etc-hosts-5208 deletion completed in 50.07189314s • [SLOW TEST:59.083 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:05:16.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-453260dd-d555-4b6e-98c7-499354d876db STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:05:20.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9033" for this suite. 
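A ConfigMap carrying both text and binary payloads, as this spec creates, looks roughly like this (keys and contents are illustrative; binaryData values are base64-encoded):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-test-upd-453260dd-d555-4b6e-98c7-499354d876db  # from the log
  data:
    data-1: value-1                     # illustrative text entry
  binaryData:
    dump.bin: AQIDBA==                  # illustrative bytes 0x01 0x02 0x03 0x04, base64-encoded

Mounted as a volume, both entries surface as files, and the spec's two waits check the text and binary contents respectively.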
Mar 13 13:05:42.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:05:42.367: INFO: namespace configmap-9033 deletion completed in 22.069674879s • [SLOW TEST:26.145 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:05:42.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-cb3dcd2f-7503-4c74-88d8-55cdc62b67e0 in namespace container-probe-6573 Mar 13 13:05:46.463: INFO: Started pod test-webserver-cb3dcd2f-7503-4c74-88d8-55cdc62b67e0 in namespace container-probe-6573 STEP: checking the pod's current state and verifying that restartCount is present Mar 13 13:05:46.466: INFO: Initial restart count of pod test-webserver-cb3dcd2f-7503-4c74-88d8-55cdc62b67e0 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:09:46.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6573" for this suite. 
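The probed pod here is a small webserver whose liveness endpoint keeps succeeding, so the restart count must stay at 0 for the whole four-minute observation window above. Roughly (image, path, port, and timings are illustrative; the spec's own probe path is the /healthz named in its title):

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-webserver                # illustrative name
  spec:
    containers:
    - name: test-webserver
      image: nginx                      # illustrative healthy webserver
      livenessProbe:
        httpGet:
          path: /                       # a path that always returns 200
          port: 80
        initialDelaySeconds: 15
        periodSeconds: 10
        failureThreshold: 1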
Mar 13 13:09:52.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:09:53.035: INFO: namespace container-probe-6573 deletion completed in 6.097270465s • [SLOW TEST:250.668 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:09:53.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-hbmk STEP: Creating a pod to test atomic-volume-subpath Mar 13 13:09:53.106: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hbmk" in namespace "subpath-74" to be "success or failure" Mar 13 13:09:53.110: INFO: Pod "pod-subpath-test-secret-hbmk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.448935ms Mar 13 13:09:55.114: INFO: Pod "pod-subpath-test-secret-hbmk": Phase="Running", Reason="", readiness=true. Elapsed: 2.008554058s Mar 13 13:09:57.118: INFO: Pod "pod-subpath-test-secret-hbmk": Phase="Running", Reason="", readiness=true. Elapsed: 4.012007562s Mar 13 13:09:59.121: INFO: Pod "pod-subpath-test-secret-hbmk": Phase="Running", Reason="", readiness=true. Elapsed: 6.015692442s Mar 13 13:10:01.125: INFO: Pod "pod-subpath-test-secret-hbmk": Phase="Running", Reason="", readiness=true. Elapsed: 8.019430537s Mar 13 13:10:03.129: INFO: Pod "pod-subpath-test-secret-hbmk": Phase="Running", Reason="", readiness=true. Elapsed: 10.023180866s Mar 13 13:10:05.132: INFO: Pod "pod-subpath-test-secret-hbmk": Phase="Running", Reason="", readiness=true. Elapsed: 12.026812169s Mar 13 13:10:07.136: INFO: Pod "pod-subpath-test-secret-hbmk": Phase="Running", Reason="", readiness=true. Elapsed: 14.030279404s Mar 13 13:10:09.139: INFO: Pod "pod-subpath-test-secret-hbmk": Phase="Running", Reason="", readiness=true. Elapsed: 16.033539883s Mar 13 13:10:11.143: INFO: Pod "pod-subpath-test-secret-hbmk": Phase="Running", Reason="", readiness=true. Elapsed: 18.037615535s Mar 13 13:10:13.147: INFO: Pod "pod-subpath-test-secret-hbmk": Phase="Running", Reason="", readiness=true. Elapsed: 20.041739923s Mar 13 13:10:15.151: INFO: Pod "pod-subpath-test-secret-hbmk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.045216081s STEP: Saw pod success Mar 13 13:10:15.151: INFO: Pod "pod-subpath-test-secret-hbmk" satisfied condition "success or failure" Mar 13 13:10:15.153: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-hbmk container test-container-subpath-secret-hbmk: STEP: delete the pod Mar 13 13:10:15.203: INFO: Waiting for pod pod-subpath-test-secret-hbmk to disappear Mar 13 13:10:15.212: INFO: Pod pod-subpath-test-secret-hbmk no longer exists STEP: Deleting pod pod-subpath-test-secret-hbmk Mar 13 13:10:15.212: INFO: Deleting pod "pod-subpath-test-secret-hbmk" in namespace "subpath-74" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:10:15.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-74" for this suite. Mar 13 13:10:21.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:10:21.308: INFO: namespace subpath-74 deletion completed in 6.091085343s • [SLOW TEST:28.272 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:10:21.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:10:23.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-744" for this suite. 
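The hostAliases feature this spec exercises injects extra entries into the kubelet-managed /etc/hosts; a minimal pod (IP, hostnames, and image are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-host-aliases          # illustrative name
  spec:
    restartPolicy: Never
    hostAliases:
    - ip: "123.45.67.89"                # illustrative entries written into /etc/hosts
      hostnames:
      - foo.local
      - bar.local
    containers:
    - name: busybox
      image: busybox                    # illustrative
      command: ["sh", "-c", "cat /etc/hosts"]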
Mar 13 13:11:05.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:11:05.479: INFO: namespace kubelet-test-744 deletion completed in 42.082643315s • [SLOW TEST:44.171 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:11:05.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-6d9b84ce-4a08-475e-aaaf-efe1bb2859e5 in namespace container-probe-5054 Mar 13 13:11:07.553: INFO: Started pod liveness-6d9b84ce-4a08-475e-aaaf-efe1bb2859e5 in namespace container-probe-5054 STEP: checking the pod's current state and verifying that restartCount is present Mar 13 13:11:07.557: INFO: Initial restart count of pod liveness-6d9b84ce-4a08-475e-aaaf-efe1bb2859e5 is 0 Mar 13 13:11:19.579: INFO: Restart count of pod container-probe-5054/liveness-6d9b84ce-4a08-475e-aaaf-efe1bb2859e5 is now 1 (12.022408369s elapsed) Mar 13 13:11:39.620: INFO: Restart count of pod container-probe-5054/liveness-6d9b84ce-4a08-475e-aaaf-efe1bb2859e5 is now 2 (32.063058344s elapsed) Mar 13 13:11:59.656: INFO: Restart count of pod container-probe-5054/liveness-6d9b84ce-4a08-475e-aaaf-efe1bb2859e5 is now 3 (52.099550809s elapsed) Mar 13 13:12:19.690: INFO: Restart count of pod container-probe-5054/liveness-6d9b84ce-4a08-475e-aaaf-efe1bb2859e5 is now 4 (1m12.13330651s elapsed) Mar 13 13:13:19.802: INFO: Restart count of pod container-probe-5054/liveness-6d9b84ce-4a08-475e-aaaf-efe1bb2859e5 is now 5 (2m12.245316219s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:13:19.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5054" for this suite. 
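The restart sequence logged above (1, 2, 3, 4, 5, never resetting) comes from a liveness probe that starts failing shortly after each container start. A sketch of the two ingredients, a self-breaking probe and the restart-count read the test polls; image, command, and timings are illustrative, and the embedded Handler field is the v1.15 API shape:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Healthy for ~10s after every (re)start, then the probe's cat begins to
// fail and the kubelet restarts the container; the counter only ever grows.
func livenessPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-sketch", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "liveness",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					// v1.15 API: the probe handler is the embedded Handler field.
					Handler:             corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
}

// What the test polls: the restart counter in pod status.
func restartCount(cs kubernetes.Interface, ns, name string) (int32, error) {
	p, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return 0, err
	}
	return p.Status.ContainerStatuses[0].RestartCount, nil
}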
Mar 13 13:13:25.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:13:25.916: INFO: namespace container-probe-5054 deletion completed in 6.095713358s • [SLOW TEST:140.437 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:13:25.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 13 13:13:25.989: INFO: Waiting up to 5m0s for pod "pod-d5dd7f99-d4fc-40ee-b3ce-e6985f5fdd04" in namespace "emptydir-3400" to be "success or failure" Mar 13 13:13:25.993: INFO: Pod "pod-d5dd7f99-d4fc-40ee-b3ce-e6985f5fdd04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078704ms Mar 13 13:13:27.997: INFO: Pod "pod-d5dd7f99-d4fc-40ee-b3ce-e6985f5fdd04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007910932s STEP: Saw pod success Mar 13 13:13:27.997: INFO: Pod "pod-d5dd7f99-d4fc-40ee-b3ce-e6985f5fdd04" satisfied condition "success or failure" Mar 13 13:13:28.000: INFO: Trying to get logs from node iruya-worker pod pod-d5dd7f99-d4fc-40ee-b3ce-e6985f5fdd04 container test-container: STEP: delete the pod Mar 13 13:13:28.018: INFO: Waiting for pod pod-d5dd7f99-d4fc-40ee-b3ce-e6985f5fdd04 to disappear Mar 13 13:13:28.023: INFO: Pod pod-d5dd7f99-d4fc-40ee-b3ce-e6985f5fdd04 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:13:28.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3400" for this suite. 
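The "(non-root,0666,default)" triplet in the test name encodes the emptyDir matrix: the UID the container runs as, the file mode it creates, and the emptyDir medium. A hedged sketch of such a pod; the UID, paths, and busybox commands are stand-ins for the suite's mounttest helper image:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Non-root UID, 0666 file, default (node-disk-backed) emptyDir medium; the
// test then greps the printed mode out of the container log.
func emptyDirModePod() *corev1.Pod {
	uid := int64(1001) // illustrative non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name:         "cache",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}, // Medium "" = default
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"/bin/sh", "-c",
					"touch /cache/f && chmod 0666 /cache/f && stat -c '%a' /cache/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
			}},
		},
	}
}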
Mar 13 13:13:34.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:13:34.142: INFO: namespace emptydir-3400 deletion completed in 6.116069273s • [SLOW TEST:8.226 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:13:34.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 13 13:13:38.734: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fa1882a0-1dbd-4d2e-b1af-f7da758a2cd0" Mar 13 13:13:38.734: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fa1882a0-1dbd-4d2e-b1af-f7da758a2cd0" in namespace "pods-9787" to be "terminated due to deadline exceeded" Mar 13 13:13:38.755: INFO: Pod "pod-update-activedeadlineseconds-fa1882a0-1dbd-4d2e-b1af-f7da758a2cd0": Phase="Running", Reason="", readiness=true. Elapsed: 21.437536ms Mar 13 13:13:40.759: INFO: Pod "pod-update-activedeadlineseconds-fa1882a0-1dbd-4d2e-b1af-f7da758a2cd0": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.024899484s Mar 13 13:13:40.759: INFO: Pod "pod-update-activedeadlineseconds-fa1882a0-1dbd-4d2e-b1af-f7da758a2cd0" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:13:40.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9787" for this suite. 
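The pods-9787 test above works because spec.activeDeadlineSeconds is one of the few pod-spec fields that may be mutated on a running pod. A minimal sketch of the update step, again assuming pre-1.18 client-go signatures:

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// activeDeadlineSeconds may be set or shortened on a live pod, never
// extended or removed; once it elapses the kubelet fails the pod with
// Phase=Failed, Reason=DeadlineExceeded, the condition waited on above.
func shortenDeadline(cs kubernetes.Interface, ns, name string, secs int64) error {
	pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Spec.ActiveDeadlineSeconds = &secs // e.g. 5: killed ~5s after pod start
	_, err = cs.CoreV1().Pods(ns).Update(pod)
	return err
}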
Mar 13 13:13:46.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:13:46.849: INFO: namespace pods-9787 deletion completed in 6.087778937s • [SLOW TEST:12.707 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:13:46.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:13:46.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-386" for this suite. Mar 13 13:13:52.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:13:53.022: INFO: namespace services-386 deletion completed in 6.078810709s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.173 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:13:53.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 13:13:53.060: INFO: Creating deployment "test-recreate-deployment" Mar 13 13:13:53.068: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 13 13:13:53.122: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 13 13:13:55.127: INFO: Waiting deployment 
"test-recreate-deployment" to complete Mar 13 13:13:55.129: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 13 13:13:55.135: INFO: Updating deployment test-recreate-deployment Mar 13 13:13:55.135: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 13 13:13:55.300: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2416,SelfLink:/apis/apps/v1/namespaces/deployment-2416/deployments/test-recreate-deployment,UID:4b0775f1-f481-4d19-99af-d9869bc4bdd2,ResourceVersion:903033,Generation:2,CreationTimestamp:2020-03-13 13:13:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-13 13:13:55 +0000 UTC 2020-03-13 13:13:55 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-13 13:13:55 +0000 UTC 2020-03-13 13:13:53 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 13 13:13:55.311: INFO: New ReplicaSet 
"test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-2416,SelfLink:/apis/apps/v1/namespaces/deployment-2416/replicasets/test-recreate-deployment-5c8c9cc69d,UID:33964b4b-51da-4be3-90e4-09d819c460f4,ResourceVersion:903032,Generation:1,CreationTimestamp:2020-03-13 13:13:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4b0775f1-f481-4d19-99af-d9869bc4bdd2 0xc0027b4d17 0xc0027b4d18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 13 13:13:55.311: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 13 13:13:55.311: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-2416,SelfLink:/apis/apps/v1/namespaces/deployment-2416/replicasets/test-recreate-deployment-6df85df6b9,UID:e0781f17-221c-48ac-92af-46bc51dcdc45,ResourceVersion:903022,Generation:2,CreationTimestamp:2020-03-13 13:13:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 
1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4b0775f1-f481-4d19-99af-d9869bc4bdd2 0xc0027b4de7 0xc0027b4de8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 13 13:13:55.314: INFO: Pod "test-recreate-deployment-5c8c9cc69d-pg7kg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-pg7kg,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-2416,SelfLink:/api/v1/namespaces/deployment-2416/pods/test-recreate-deployment-5c8c9cc69d-pg7kg,UID:0cbca260-987f-428a-9c7b-7eb8cfe41952,ResourceVersion:903034,Generation:0,CreationTimestamp:2020-03-13 13:13:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 33964b4b-51da-4be3-90e4-09d819c460f4 0xc0027b5717 0xc0027b5718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2x6pn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2x6pn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2x6pn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027b5790} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027b57b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:13:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:13:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:13:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:13:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-13 13:13:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:13:55.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2416" for this suite. 
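The object dumps above show the mechanism under test: the old ReplicaSet (revision 1, redis) is scaled to Replicas:*0 before the new one (revision 2, nginx) brings up its pod, so old and new pods never overlap. A sketch of a deployment that produces this behavior, reusing the images from the log; only Strategy.Type differs from a default rolling update:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Strategy=Recreate: all old pods are deleted before any new pod is created.
func recreateDeployment(ns string) *appsv1.Deployment {
	one := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment", Namespace: ns},
		Spec: appsv1.DeploymentSpec{
			Replicas: &one,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "redis",
					Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
				}}},
			},
		},
	}
}

// Triggering the second rollout is then just mutating the template, e.g.
// swapping the container to nginx:1.14-alpine and calling
// AppsV1().Deployments(ns).Update(d).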
Mar 13 13:14:01.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:14:01.413: INFO: namespace deployment-2416 deletion completed in 6.095759933s • [SLOW TEST:8.390 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:14:01.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-263591eb-9b05-4f05-9506-dcb8a040b61c STEP: Creating secret with name s-test-opt-upd-e6f74ff1-7a5d-40d2-a3d1-f741547ec4fb STEP: Creating the pod STEP: Deleting secret s-test-opt-del-263591eb-9b05-4f05-9506-dcb8a040b61c STEP: Updating secret s-test-opt-upd-e6f74ff1-7a5d-40d2-a3d1-f741547ec4fb STEP: Creating secret with name s-test-opt-create-f8ff7f41-c783-4c1d-9396-4699874b4480 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:15:31.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7732" for this suite. 
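The projected-7732 test deletes one source secret, updates another, and creates a third while the pod keeps running; that only works because each projected source is marked optional. A sketch of the volume shape, reusing the s-test-opt-* naming from the log (hash suffixes omitted); the container command is illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Optional projected sources: missing secrets don't fail the mount, and the
// kubelet's sync loop rewrites the projected files as the secrets change.
func optionalProjectedPod() *corev1.Pod {
	optional := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-optional-sketch"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-secrets",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
								Optional:             &optional,
							}},
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
								Optional:             &optional,
							}},
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "busybox",
				Command:      []string{"/bin/sh", "-c", "while true; do ls -l /etc/projected; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secrets", MountPath: "/etc/projected"}},
			}},
		},
	}
}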
Mar 13 13:15:53.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:15:54.033: INFO: namespace projected-7732 deletion completed in 22.118279869s • [SLOW TEST:112.620 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:15:54.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-e20a669b-3f66-4bb2-a041-ae60c41c9a12 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:15:54.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9270" for this suite. Mar 13 13:16:00.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:16:00.196: INFO: namespace configmap-9270 deletion completed in 6.084180063s • [SLOW TEST:6.162 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:16:00.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-7517/configmap-test-875a0df8-b939-4258-8732-284c79b20b55 STEP: Creating a pod to test consume configMaps Mar 13 13:16:00.278: INFO: Waiting up to 5m0s for pod "pod-configmaps-36c3a37a-d9ca-403a-9950-1e47d0aa67ce" in namespace "configmap-7517" to be "success or failure" Mar 13 13:16:00.303: INFO: Pod "pod-configmaps-36c3a37a-d9ca-403a-9950-1e47d0aa67ce": Phase="Pending", Reason="", readiness=false. Elapsed: 25.289204ms Mar 13 13:16:02.307: INFO: Pod "pod-configmaps-36c3a37a-d9ca-403a-9950-1e47d0aa67ce": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02876414s Mar 13 13:16:04.309: INFO: Pod "pod-configmaps-36c3a37a-d9ca-403a-9950-1e47d0aa67ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031427603s STEP: Saw pod success Mar 13 13:16:04.309: INFO: Pod "pod-configmaps-36c3a37a-d9ca-403a-9950-1e47d0aa67ce" satisfied condition "success or failure" Mar 13 13:16:04.311: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-36c3a37a-d9ca-403a-9950-1e47d0aa67ce container env-test: STEP: delete the pod Mar 13 13:16:04.331: INFO: Waiting for pod pod-configmaps-36c3a37a-d9ca-403a-9950-1e47d0aa67ce to disappear Mar 13 13:16:04.342: INFO: Pod pod-configmaps-36c3a37a-d9ca-403a-9950-1e47d0aa67ce no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:16:04.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7517" for this suite. Mar 13 13:16:10.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:16:10.424: INFO: namespace configmap-7517 deletion completed in 6.079339641s • [SLOW TEST:10.228 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:16:10.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Mar 13 13:16:10.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 13 13:16:10.602: INFO: stderr: "" Mar 13 13:16:10.602: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:16:10.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "kubectl-815" for this suite. Mar 13 13:16:16.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:16:16.673: INFO: namespace kubectl-815 deletion completed in 6.066792464s • [SLOW TEST:6.249 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:16:16.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-ppf8 STEP: Creating a pod to test atomic-volume-subpath Mar 13 13:16:16.733: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ppf8" in namespace "subpath-6135" to be "success or failure" Mar 13 13:16:16.755: INFO: Pod "pod-subpath-test-configmap-ppf8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.633455ms Mar 13 13:16:18.759: INFO: Pod "pod-subpath-test-configmap-ppf8": Phase="Running", Reason="", readiness=true. Elapsed: 2.026549248s Mar 13 13:16:20.763: INFO: Pod "pod-subpath-test-configmap-ppf8": Phase="Running", Reason="", readiness=true. Elapsed: 4.030168513s Mar 13 13:16:22.767: INFO: Pod "pod-subpath-test-configmap-ppf8": Phase="Running", Reason="", readiness=true. Elapsed: 6.034227506s Mar 13 13:16:24.769: INFO: Pod "pod-subpath-test-configmap-ppf8": Phase="Running", Reason="", readiness=true. Elapsed: 8.036700186s Mar 13 13:16:26.773: INFO: Pod "pod-subpath-test-configmap-ppf8": Phase="Running", Reason="", readiness=true. Elapsed: 10.04070478s Mar 13 13:16:28.777: INFO: Pod "pod-subpath-test-configmap-ppf8": Phase="Running", Reason="", readiness=true. Elapsed: 12.044001464s Mar 13 13:16:30.781: INFO: Pod "pod-subpath-test-configmap-ppf8": Phase="Running", Reason="", readiness=true. Elapsed: 14.047989634s Mar 13 13:16:32.784: INFO: Pod "pod-subpath-test-configmap-ppf8": Phase="Running", Reason="", readiness=true. Elapsed: 16.051712701s Mar 13 13:16:34.787: INFO: Pod "pod-subpath-test-configmap-ppf8": Phase="Running", Reason="", readiness=true. Elapsed: 18.05419912s Mar 13 13:16:36.791: INFO: Pod "pod-subpath-test-configmap-ppf8": Phase="Running", Reason="", readiness=true. Elapsed: 20.057901495s Mar 13 13:16:38.794: INFO: Pod "pod-subpath-test-configmap-ppf8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.061192116s STEP: Saw pod success Mar 13 13:16:38.794: INFO: Pod "pod-subpath-test-configmap-ppf8" satisfied condition "success or failure" Mar 13 13:16:38.796: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-ppf8 container test-container-subpath-configmap-ppf8: STEP: delete the pod Mar 13 13:16:38.827: INFO: Waiting for pod pod-subpath-test-configmap-ppf8 to disappear Mar 13 13:16:38.858: INFO: Pod pod-subpath-test-configmap-ppf8 no longer exists STEP: Deleting pod pod-subpath-test-configmap-ppf8 Mar 13 13:16:38.858: INFO: Deleting pod "pod-subpath-test-configmap-ppf8" in namespace "subpath-6135" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:16:38.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6135" for this suite. Mar 13 13:16:44.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:16:44.941: INFO: namespace subpath-6135 deletion completed in 6.07335239s • [SLOW TEST:28.268 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:16:44.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 13 13:16:44.989: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Mar 13 13:16:45.752: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 13 13:16:50.701: INFO: Waited 2.804293555s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:16:51.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-163" for this suite. 
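"Registering the sample API server" in the aggregator test amounts to creating an APIService object that maps a group/version onto a Service fronting the extension apiserver. A hedged sketch using the kube-aggregator types; the group, service coordinates, and priorities are illustrative (the v1.15-era sample server registers wardle.k8s.io), and the object would be created via the kube-aggregator clientset's ApiregistrationV1().APIServices().Create(...):

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

// One APIService per group/version: the aggregator proxies matching
// requests to the named Service, verifying its serving cert against CABundle.
func sampleAPIService(caBundle []byte) *apiregv1.APIService {
	return &apiregv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.k8s.io"},
		Spec: apiregv1.APIServiceSpec{
			Group:   "wardle.k8s.io",
			Version: "v1alpha1",
			Service: &apiregv1.ServiceReference{
				Namespace: "aggregator-163", // illustrative coordinates
				Name:      "sample-api",
			},
			CABundle:             caBundle,
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
}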
Mar 13 13:16:57.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:16:57.308: INFO: namespace aggregator-163 deletion completed in 6.150273912s • [SLOW TEST:12.367 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:16:57.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-bd2314f1-ada8-42b0-bd07-c3c4a4fff4ad STEP: Creating a pod to test consume configMaps Mar 13 13:16:57.369: INFO: Waiting up to 5m0s for pod "pod-configmaps-061ba65e-3ecb-4bc0-8f62-2509defb5876" in namespace "configmap-853" to be "success or failure" Mar 13 13:16:57.373: INFO: Pod "pod-configmaps-061ba65e-3ecb-4bc0-8f62-2509defb5876": Phase="Pending", Reason="", readiness=false. Elapsed: 4.449851ms Mar 13 13:16:59.377: INFO: Pod "pod-configmaps-061ba65e-3ecb-4bc0-8f62-2509defb5876": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008284233s Mar 13 13:17:01.381: INFO: Pod "pod-configmaps-061ba65e-3ecb-4bc0-8f62-2509defb5876": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012026372s STEP: Saw pod success Mar 13 13:17:01.381: INFO: Pod "pod-configmaps-061ba65e-3ecb-4bc0-8f62-2509defb5876" satisfied condition "success or failure" Mar 13 13:17:01.383: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-061ba65e-3ecb-4bc0-8f62-2509defb5876 container configmap-volume-test: STEP: delete the pod Mar 13 13:17:01.400: INFO: Waiting for pod pod-configmaps-061ba65e-3ecb-4bc0-8f62-2509defb5876 to disappear Mar 13 13:17:01.403: INFO: Pod pod-configmaps-061ba65e-3ecb-4bc0-8f62-2509defb5876 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:17:01.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-853" for this suite. 
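defaultMode on the configMap volume source is what the configmap-853 test asserts from inside the container: the kubelet writes every projected file with those permission bits. A sketch; the 0400 mode, names, and mount path are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// DefaultMode is an int32 of permission bits; a Go octal literal keeps the
// intent readable. The container prints each file's mode for the test to check.
func configMapModePod() *corev1.Pod {
	mode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-mode-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
						DefaultMode:          &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"/bin/sh", "-c", "stat -c '%a' /etc/cm/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
			}},
		},
	}
}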
Mar 13 13:17:07.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:17:07.495: INFO: namespace configmap-853 deletion completed in 6.088624s • [SLOW TEST:10.187 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:17:07.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 13:17:07.540: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:17:09.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8665" for this suite. 
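The pods-8665 test drives the exec subresource over a raw websocket; the everyday client-go route to the same apiserver endpoint is the remotecommand package, sketched here instead (SPDY executor, pre-1.18 Stream signature). The pod and container names are made up:

package sketch

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// Build the exec URL for the pod, then stream the command's output back.
func execInPod(cfg *restclient.Config, cs kubernetes.Interface, ns, pod string) (string, error) {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main", // hypothetical container name
			Command:   []string{"echo", "remote execution"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &out, Stderr: &out})
	return out.String(), err
}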
Mar 13 13:17:47.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:17:47.763: INFO: namespace pods-8665 deletion completed in 38.097079325s • [SLOW TEST:40.268 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:17:47.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 13 13:17:47.843: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:18:04.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2918" for this suite. 
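The submit-and-remove flow above ("setting up watch", "verifying pod deletion was observed") is a watch scoped to a single pod name plus a graceful delete. A minimal sketch, again with pre-1.18 signatures:

package sketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// After a graceful delete, the watch yields MODIFIED events while the kubelet
// tears the pod down, and finally a DELETED event once it is gone.
func awaitDeletion(cs kubernetes.Interface, ns, name string) error {
	w, err := cs.CoreV1().Pods(ns).Watch(metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}
	defer w.Stop()

	grace := int64(30)
	if err := cs.CoreV1().Pods(ns).Delete(name, &metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		return err
	}
	for ev := range w.ResultChan() {
		if ev.Type == watch.Deleted {
			fmt.Println("pod deletion observed")
			return nil
		}
	}
	return fmt.Errorf("watch closed before DELETED event")
}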
Mar 13 13:18:10.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:18:10.447: INFO: namespace pods-2918 deletion completed in 6.118444942s • [SLOW TEST:22.684 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:18:10.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-495ee238-0026-4cb8-b2e7-76ce6ec69ba7 STEP: Creating a pod to test consume secrets Mar 13 13:18:10.502: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fca98bbe-5be1-4a90-800a-40db203175a2" in namespace "projected-868" to be "success or failure" Mar 13 13:18:10.510: INFO: Pod "pod-projected-secrets-fca98bbe-5be1-4a90-800a-40db203175a2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.269791ms Mar 13 13:18:12.514: INFO: Pod "pod-projected-secrets-fca98bbe-5be1-4a90-800a-40db203175a2": Phase="Running", Reason="", readiness=true. Elapsed: 2.011752982s Mar 13 13:18:14.517: INFO: Pod "pod-projected-secrets-fca98bbe-5be1-4a90-800a-40db203175a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0153327s STEP: Saw pod success Mar 13 13:18:14.517: INFO: Pod "pod-projected-secrets-fca98bbe-5be1-4a90-800a-40db203175a2" satisfied condition "success or failure" Mar 13 13:18:14.519: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-fca98bbe-5be1-4a90-800a-40db203175a2 container projected-secret-volume-test: STEP: delete the pod Mar 13 13:18:14.536: INFO: Waiting for pod pod-projected-secrets-fca98bbe-5be1-4a90-800a-40db203175a2 to disappear Mar 13 13:18:14.553: INFO: Pod pod-projected-secrets-fca98bbe-5be1-4a90-800a-40db203175a2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:18:14.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-868" for this suite. 
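"with mappings" in the projected-868 test means the projection carries explicit items entries that rename secret keys to chosen paths instead of defaulting the filename to the key name. The data-1 / new-path-data-1 pair below follows the e2e convention but is illustrative here:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// Items remaps a secret key to a different filename inside the mount.
func mappedSecretVolume() corev1.Volume {
	return corev1.Volume{
		Name: "projected-secret",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // key in the Secret
							Path: "new-path-data-1", // filename inside the mount
						}},
					},
				}},
			},
		},
	}
}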
Mar 13 13:18:20.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:18:20.640: INFO: namespace projected-868 deletion completed in 6.083619414s • [SLOW TEST:10.192 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:18:20.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 13 13:18:24.747: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:24.763: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:26.763: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:26.767: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:28.763: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:28.767: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:30.763: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:30.767: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:32.763: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:32.767: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:34.763: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:34.766: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:36.763: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:36.767: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:38.763: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:38.767: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:40.763: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:40.767: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:42.763: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:42.766: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:44.763: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:44.767: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:46.763: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear Mar 13 13:18:46.766: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:48.763: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:48.768: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:50.763: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:50.766: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:52.763: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:52.767: INFO: Pod pod-with-poststart-exec-hook still exists Mar 13 13:18:54.763: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 13 13:18:54.767: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:18:54.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9213" for this suite. Mar 13 13:19:16.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:19:16.832: INFO: namespace container-lifecycle-hook-9213 deletion completed in 22.061442411s • [SLOW TEST:56.192 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:19:16.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 13 13:19:16.866: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:19:20.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6512" for this suite. 
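The init-container-6512 test ("PodSpec: initContainers in spec.initContainers") checks ordering: on a RestartPolicy=Never pod, each init container runs to completion exactly once, in sequence, before the app container starts. A sketch of such a pod; names and image are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// init1 must exit 0 before init2 starts; init2 must exit 0 before run1 starts.
func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name: "run1", Image: "busybox", Command: []string{"/bin/true"},
			}},
		},
	}
}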
Mar 13 13:19:26.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:19:26.789: INFO: namespace init-container-6512 deletion completed in 6.082946791s • [SLOW TEST:9.957 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:19:26.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-6dcc0ff6-8050-4d91-bbf1-7d8d895ea89a STEP: Creating a pod to test consume configMaps Mar 13 13:19:26.862: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bbdbb905-4b8e-453b-9a2e-05b891fe367c" in namespace "projected-2373" to be "success or failure" Mar 13 13:19:26.883: INFO: Pod "pod-projected-configmaps-bbdbb905-4b8e-453b-9a2e-05b891fe367c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.252695ms Mar 13 13:19:28.896: INFO: Pod "pod-projected-configmaps-bbdbb905-4b8e-453b-9a2e-05b891fe367c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.033962335s STEP: Saw pod success Mar 13 13:19:28.896: INFO: Pod "pod-projected-configmaps-bbdbb905-4b8e-453b-9a2e-05b891fe367c" satisfied condition "success or failure" Mar 13 13:19:28.898: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-bbdbb905-4b8e-453b-9a2e-05b891fe367c container projected-configmap-volume-test: STEP: delete the pod Mar 13 13:19:28.925: INFO: Waiting for pod pod-projected-configmaps-bbdbb905-4b8e-453b-9a2e-05b891fe367c to disappear Mar 13 13:19:28.932: INFO: Pod pod-projected-configmaps-bbdbb905-4b8e-453b-9a2e-05b891fe367c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:19:28.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2373" for this suite. 
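[Editor's note] The Projected configMap spec above mounts a ConfigMap through a projected volume and checks that the projected files carry the requested mode. A sketch of such a manifest follows; the pod name, ConfigMap name, mode, and image are assumptions for illustration.

apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo                    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test    # container name borrowed from the log
    image: busybox:1.29                      # assumed image
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      defaultMode: 0400            # mode applied to every projected file
      sources:
      - configMap:
          name: my-config          # hypothetical ConfigMap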
Mar 13 13:19:34.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:19:35.025: INFO: namespace projected-2373 deletion completed in 6.089379174s • [SLOW TEST:8.236 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:19:35.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 13 13:19:35.112: INFO: Waiting up to 5m0s for pod "pod-9c41916a-c5a9-49c3-b633-05e60d686995" in namespace "emptydir-8883" to be "success or failure" Mar 13 13:19:35.128: INFO: Pod "pod-9c41916a-c5a9-49c3-b633-05e60d686995": Phase="Pending", Reason="", readiness=false. Elapsed: 16.604523ms Mar 13 13:19:37.132: INFO: Pod "pod-9c41916a-c5a9-49c3-b633-05e60d686995": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02055166s STEP: Saw pod success Mar 13 13:19:37.132: INFO: Pod "pod-9c41916a-c5a9-49c3-b633-05e60d686995" satisfied condition "success or failure" Mar 13 13:19:37.135: INFO: Trying to get logs from node iruya-worker pod pod-9c41916a-c5a9-49c3-b633-05e60d686995 container test-container: STEP: delete the pod Mar 13 13:19:37.154: INFO: Waiting for pod pod-9c41916a-c5a9-49c3-b633-05e60d686995 to disappear Mar 13 13:19:37.176: INFO: Pod pod-9c41916a-c5a9-49c3-b633-05e60d686995 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:19:37.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8883" for this suite. 
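[Editor's note] The EmptyDir spec above writes into an emptyDir volume on the node's default medium and verifies the file's owner and 0644 mode. A minimal equivalent, with assumed names and image:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container          # container name borrowed from the log
    image: busybox:1.29           # assumed image
    command: ["sh", "-c", "touch /ephemeral/f && chmod 0644 /ephemeral/f && ls -ln /ephemeral"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir: {}                  # empty spec = node's default backing storage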
Mar 13 13:19:43.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:19:43.247: INFO: namespace emptydir-8883 deletion completed in 6.06764545s • [SLOW TEST:8.222 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:19:43.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Mar 13 13:19:43.288: INFO: Waiting up to 5m0s for pod "var-expansion-31fd29e9-213f-43ef-a319-9aade964bf6a" in namespace "var-expansion-5791" to be "success or failure" Mar 13 13:19:43.291: INFO: Pod "var-expansion-31fd29e9-213f-43ef-a319-9aade964bf6a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.662842ms Mar 13 13:19:45.295: INFO: Pod "var-expansion-31fd29e9-213f-43ef-a319-9aade964bf6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00764615s STEP: Saw pod success Mar 13 13:19:45.295: INFO: Pod "var-expansion-31fd29e9-213f-43ef-a319-9aade964bf6a" satisfied condition "success or failure" Mar 13 13:19:45.297: INFO: Trying to get logs from node iruya-worker pod var-expansion-31fd29e9-213f-43ef-a319-9aade964bf6a container dapi-container: STEP: delete the pod Mar 13 13:19:45.311: INFO: Waiting for pod var-expansion-31fd29e9-213f-43ef-a319-9aade964bf6a to disappear Mar 13 13:19:45.351: INFO: Pod var-expansion-31fd29e9-213f-43ef-a319-9aade964bf6a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:19:45.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5791" for this suite. 
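[Editor's note] The Variable Expansion spec above checks that an env var can be composed from previously declared ones using $(VAR) syntax. An illustrative sketch (names and values are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container          # container name borrowed from the log
    image: busybox:1.29           # assumed image
    command: ["sh", "-c", "echo \"$FOOBAR\""]
    env:
    - name: FOO
      value: "foo-value"
    - name: BAR
      value: "bar-value"
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"     # $(VAR) may reference vars defined earlier in this list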
Mar 13 13:19:51.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:19:51.428: INFO: namespace var-expansion-5791 deletion completed in 6.074377519s • [SLOW TEST:8.182 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:19:51.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 13:19:51.472: INFO: Waiting up to 5m0s for pod "downwardapi-volume-642d1684-ef4b-4a8f-8c1c-05e32eaeee77" in namespace "projected-1592" to be "success or failure" Mar 13 13:19:51.481: INFO: Pod "downwardapi-volume-642d1684-ef4b-4a8f-8c1c-05e32eaeee77": Phase="Pending", Reason="", readiness=false. Elapsed: 9.242386ms Mar 13 13:19:53.484: INFO: Pod "downwardapi-volume-642d1684-ef4b-4a8f-8c1c-05e32eaeee77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012564232s Mar 13 13:19:55.489: INFO: Pod "downwardapi-volume-642d1684-ef4b-4a8f-8c1c-05e32eaeee77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016773772s STEP: Saw pod success Mar 13 13:19:55.489: INFO: Pod "downwardapi-volume-642d1684-ef4b-4a8f-8c1c-05e32eaeee77" satisfied condition "success or failure" Mar 13 13:19:55.497: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-642d1684-ef4b-4a8f-8c1c-05e32eaeee77 container client-container: STEP: delete the pod Mar 13 13:19:55.516: INFO: Waiting for pod downwardapi-volume-642d1684-ef4b-4a8f-8c1c-05e32eaeee77 to disappear Mar 13 13:19:55.526: INFO: Pod downwardapi-volume-642d1684-ef4b-4a8f-8c1c-05e32eaeee77 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:19:55.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1592" for this suite. 
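[Editor's note] The Projected downwardAPI spec above projects limits.cpu into a file for a container that declares no CPU limit, so the value falls back to the node's allocatable CPU. A sketch under assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: downward-volume-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container        # container name borrowed from the log
    image: busybox:1.29           # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits here: the projected value defaults to node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container   # required for volume-based resourceFieldRef
              resource: limits.cpu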
Mar 13 13:20:01.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:20:01.896: INFO: namespace projected-1592 deletion completed in 6.366335002s • [SLOW TEST:10.467 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:20:01.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 13 13:20:02.092: INFO: Waiting up to 5m0s for pod "pod-a683b437-6855-4cbb-a032-89f47c62a0d6" in namespace "emptydir-5230" to be "success or failure" Mar 13 13:20:02.095: INFO: Pod "pod-a683b437-6855-4cbb-a032-89f47c62a0d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.421963ms Mar 13 13:20:04.098: INFO: Pod "pod-a683b437-6855-4cbb-a032-89f47c62a0d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006087298s STEP: Saw pod success Mar 13 13:20:04.099: INFO: Pod "pod-a683b437-6855-4cbb-a032-89f47c62a0d6" satisfied condition "success or failure" Mar 13 13:20:04.101: INFO: Trying to get logs from node iruya-worker2 pod pod-a683b437-6855-4cbb-a032-89f47c62a0d6 container test-container: STEP: delete the pod Mar 13 13:20:04.127: INFO: Waiting for pod pod-a683b437-6855-4cbb-a032-89f47c62a0d6 to disappear Mar 13 13:20:04.137: INFO: Pod pod-a683b437-6855-4cbb-a032-89f47c62a0d6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:20:04.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5230" for this suite. 
Mar 13 13:20:10.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:20:10.219: INFO: namespace emptydir-5230 deletion completed in 6.079089766s • [SLOW TEST:8.323 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:20:10.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 13 13:20:10.264: INFO: Waiting up to 5m0s for pod "pod-7346697a-f40a-483d-9ba9-7c6a0612d5bf" in namespace "emptydir-528" to be "success or failure" Mar 13 13:20:10.269: INFO: Pod "pod-7346697a-f40a-483d-9ba9-7c6a0612d5bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065965ms Mar 13 13:20:12.272: INFO: Pod "pod-7346697a-f40a-483d-9ba9-7c6a0612d5bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007438216s STEP: Saw pod success Mar 13 13:20:12.272: INFO: Pod "pod-7346697a-f40a-483d-9ba9-7c6a0612d5bf" satisfied condition "success or failure" Mar 13 13:20:12.274: INFO: Trying to get logs from node iruya-worker pod pod-7346697a-f40a-483d-9ba9-7c6a0612d5bf container test-container: STEP: delete the pod Mar 13 13:20:12.330: INFO: Waiting for pod pod-7346697a-f40a-483d-9ba9-7c6a0612d5bf to disappear Mar 13 13:20:12.339: INFO: Pod pod-7346697a-f40a-483d-9ba9-7c6a0612d5bf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:20:12.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-528" for this suite. 
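[Editor's note] The tmpfs variant that follows differs from the default-medium emptyDir specs only in the volume definition: medium: Memory backs the directory with a RAM-based tmpfs instead of node storage. Sketch with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container          # container name borrowed from the log
    image: busybox:1.29           # assumed image
    command: ["sh", "-c", "mount | grep /ephemeral && ls -ln /ephemeral"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory              # tmpfs; usage counts against the container's memory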
Mar 13 13:20:18.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:20:18.421: INFO: namespace emptydir-528 deletion completed in 6.078631813s • [SLOW TEST:8.201 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:20:18.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 13 13:20:18.473: INFO: Waiting up to 5m0s for pod "downward-api-d8fe6361-6c4a-4504-bd40-825fbe0cf7cb" in namespace "downward-api-8762" to be "success or failure" Mar 13 13:20:18.491: INFO: Pod "downward-api-d8fe6361-6c4a-4504-bd40-825fbe0cf7cb": Phase="Pending", Reason="", readiness=false. Elapsed: 18.379138ms Mar 13 13:20:20.495: INFO: Pod "downward-api-d8fe6361-6c4a-4504-bd40-825fbe0cf7cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022000833s STEP: Saw pod success Mar 13 13:20:20.495: INFO: Pod "downward-api-d8fe6361-6c4a-4504-bd40-825fbe0cf7cb" satisfied condition "success or failure" Mar 13 13:20:20.497: INFO: Trying to get logs from node iruya-worker pod downward-api-d8fe6361-6c4a-4504-bd40-825fbe0cf7cb container dapi-container: STEP: delete the pod Mar 13 13:20:20.531: INFO: Waiting for pod downward-api-d8fe6361-6c4a-4504-bd40-825fbe0cf7cb to disappear Mar 13 13:20:20.543: INFO: Pod downward-api-d8fe6361-6c4a-4504-bd40-825fbe0cf7cb no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:20:20.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8762" for this suite. 
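[Editor's note] The Downward API spec above exposes limits.cpu/limits.memory as env vars on a container with no limits set, so the values default to node allocatable. Sketch (names assumed; for env vars, containerName defaults to the enclosing container):

apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container          # container name borrowed from the log
    image: busybox:1.29           # assumed image
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu    # no limit declared -> node allocatable CPU
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory # no limit declared -> node allocatable memory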
Mar 13 13:20:26.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:20:26.651: INFO: namespace downward-api-8762 deletion completed in 6.105505075s • [SLOW TEST:8.230 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:20:26.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 13 13:20:26.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9552' Mar 13 13:20:28.182: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 13 13:20:28.182: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Mar 13 13:20:28.251: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-bk7w4] Mar 13 13:20:28.251: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-bk7w4" in namespace "kubectl-9552" to be "running and ready" Mar 13 13:20:28.253: INFO: Pod "e2e-test-nginx-rc-bk7w4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.424989ms Mar 13 13:20:30.256: INFO: Pod "e2e-test-nginx-rc-bk7w4": Phase="Running", Reason="", readiness=true. Elapsed: 2.005745289s Mar 13 13:20:30.256: INFO: Pod "e2e-test-nginx-rc-bk7w4" satisfied condition "running and ready" Mar 13 13:20:30.256: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-bk7w4] Mar 13 13:20:30.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-9552' Mar 13 13:20:30.363: INFO: stderr: "" Mar 13 13:20:30.363: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Mar 13 13:20:30.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9552' Mar 13 13:20:30.437: INFO: stderr: "" Mar 13 13:20:30.437: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:20:30.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9552" for this suite. Mar 13 13:20:52.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:20:52.556: INFO: namespace kubectl-9552 deletion completed in 22.102375937s • [SLOW TEST:25.905 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:20:52.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8732 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 13 13:20:52.636: INFO: Found 0 stateful pods, waiting for 3 Mar 13 13:21:02.640: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 13 13:21:02.640: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 13 13:21:02.640: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 13 13:21:02.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8732 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 13 13:21:02.868: INFO: stderr: "I0313 13:21:02.779558 245 log.go:172] (0xc00092c0b0) (0xc00070a8c0) Create stream\nI0313 
13:21:02.779606 245 log.go:172] (0xc00092c0b0) (0xc00070a8c0) Stream added, broadcasting: 1\nI0313 13:21:02.781633 245 log.go:172] (0xc00092c0b0) Reply frame received for 1\nI0313 13:21:02.781665 245 log.go:172] (0xc00092c0b0) (0xc0008a4000) Create stream\nI0313 13:21:02.781678 245 log.go:172] (0xc00092c0b0) (0xc0008a4000) Stream added, broadcasting: 3\nI0313 13:21:02.782398 245 log.go:172] (0xc00092c0b0) Reply frame received for 3\nI0313 13:21:02.782424 245 log.go:172] (0xc00092c0b0) (0xc00028a000) Create stream\nI0313 13:21:02.782433 245 log.go:172] (0xc00092c0b0) (0xc00028a000) Stream added, broadcasting: 5\nI0313 13:21:02.783057 245 log.go:172] (0xc00092c0b0) Reply frame received for 5\nI0313 13:21:02.843886 245 log.go:172] (0xc00092c0b0) Data frame received for 5\nI0313 13:21:02.843922 245 log.go:172] (0xc00028a000) (5) Data frame handling\nI0313 13:21:02.843945 245 log.go:172] (0xc00028a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0313 13:21:02.863083 245 log.go:172] (0xc00092c0b0) Data frame received for 5\nI0313 13:21:02.863116 245 log.go:172] (0xc00028a000) (5) Data frame handling\nI0313 13:21:02.863149 245 log.go:172] (0xc00092c0b0) Data frame received for 3\nI0313 13:21:02.863180 245 log.go:172] (0xc0008a4000) (3) Data frame handling\nI0313 13:21:02.863204 245 log.go:172] (0xc0008a4000) (3) Data frame sent\nI0313 13:21:02.863223 245 log.go:172] (0xc00092c0b0) Data frame received for 3\nI0313 13:21:02.863238 245 log.go:172] (0xc0008a4000) (3) Data frame handling\nI0313 13:21:02.864522 245 log.go:172] (0xc00092c0b0) Data frame received for 1\nI0313 13:21:02.864560 245 log.go:172] (0xc00070a8c0) (1) Data frame handling\nI0313 13:21:02.864578 245 log.go:172] (0xc00070a8c0) (1) Data frame sent\nI0313 13:21:02.864595 245 log.go:172] (0xc00092c0b0) (0xc00070a8c0) Stream removed, broadcasting: 1\nI0313 13:21:02.864611 245 log.go:172] (0xc00092c0b0) Go away received\nI0313 13:21:02.864953 245 log.go:172] (0xc00092c0b0) (0xc00070a8c0) Stream removed, broadcasting: 1\nI0313 13:21:02.864974 245 log.go:172] (0xc00092c0b0) (0xc0008a4000) Stream removed, broadcasting: 3\nI0313 13:21:02.864988 245 log.go:172] (0xc00092c0b0) (0xc00028a000) Stream removed, broadcasting: 5\n" Mar 13 13:21:02.868: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 13 13:21:02.868: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 13 13:21:12.898: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 13 13:21:22.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8732 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 13 13:21:23.104: INFO: stderr: "I0313 13:21:23.033605 267 log.go:172] (0xc000116dc0) (0xc00059a640) Create stream\nI0313 13:21:23.033660 267 log.go:172] (0xc000116dc0) (0xc00059a640) Stream added, broadcasting: 1\nI0313 13:21:23.039924 267 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0313 13:21:23.039969 267 log.go:172] (0xc000116dc0) (0xc00059a6e0) Create stream\nI0313 13:21:23.039983 267 log.go:172] (0xc000116dc0) (0xc00059a6e0) Stream added, broadcasting: 3\nI0313 13:21:23.041162 267 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0313 13:21:23.041211 267 log.go:172] 
(0xc000116dc0) (0xc00059a780) Create stream\nI0313 13:21:23.041228 267 log.go:172] (0xc000116dc0) (0xc00059a780) Stream added, broadcasting: 5\nI0313 13:21:23.042268 267 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0313 13:21:23.100227 267 log.go:172] (0xc000116dc0) Data frame received for 5\nI0313 13:21:23.100277 267 log.go:172] (0xc00059a780) (5) Data frame handling\nI0313 13:21:23.100292 267 log.go:172] (0xc00059a780) (5) Data frame sent\nI0313 13:21:23.100300 267 log.go:172] (0xc000116dc0) Data frame received for 5\nI0313 13:21:23.100308 267 log.go:172] (0xc00059a780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0313 13:21:23.100332 267 log.go:172] (0xc000116dc0) Data frame received for 3\nI0313 13:21:23.100357 267 log.go:172] (0xc00059a6e0) (3) Data frame handling\nI0313 13:21:23.100374 267 log.go:172] (0xc00059a6e0) (3) Data frame sent\nI0313 13:21:23.100383 267 log.go:172] (0xc000116dc0) Data frame received for 3\nI0313 13:21:23.100388 267 log.go:172] (0xc00059a6e0) (3) Data frame handling\nI0313 13:21:23.101426 267 log.go:172] (0xc000116dc0) Data frame received for 1\nI0313 13:21:23.101449 267 log.go:172] (0xc00059a640) (1) Data frame handling\nI0313 13:21:23.101461 267 log.go:172] (0xc00059a640) (1) Data frame sent\nI0313 13:21:23.101477 267 log.go:172] (0xc000116dc0) (0xc00059a640) Stream removed, broadcasting: 1\nI0313 13:21:23.101498 267 log.go:172] (0xc000116dc0) Go away received\nI0313 13:21:23.101882 267 log.go:172] (0xc000116dc0) (0xc00059a640) Stream removed, broadcasting: 1\nI0313 13:21:23.101898 267 log.go:172] (0xc000116dc0) (0xc00059a6e0) Stream removed, broadcasting: 3\nI0313 13:21:23.101905 267 log.go:172] (0xc000116dc0) (0xc00059a780) Stream removed, broadcasting: 5\n" Mar 13 13:21:23.105: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 13 13:21:23.105: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 13 13:21:33.124: INFO: Waiting for StatefulSet statefulset-8732/ss2 to complete update Mar 13 13:21:33.124: INFO: Waiting for Pod statefulset-8732/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 13 13:21:33.124: INFO: Waiting for Pod statefulset-8732/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 13 13:21:33.124: INFO: Waiting for Pod statefulset-8732/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 13 13:21:43.128: INFO: Waiting for StatefulSet statefulset-8732/ss2 to complete update Mar 13 13:21:43.128: INFO: Waiting for Pod statefulset-8732/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 13 13:21:43.128: INFO: Waiting for Pod statefulset-8732/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 13 13:21:53.131: INFO: Waiting for StatefulSet statefulset-8732/ss2 to complete update Mar 13 13:21:53.131: INFO: Waiting for Pod statefulset-8732/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Mar 13 13:22:03.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8732 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 13 13:22:03.391: INFO: stderr: "I0313 13:22:03.281385 287 log.go:172] (0xc000116dc0) (0xc000676820) Create stream\nI0313 13:22:03.281428 287 log.go:172] (0xc000116dc0) (0xc000676820) Stream added, broadcasting: 1\nI0313 13:22:03.283527 287 
log.go:172] (0xc000116dc0) Reply frame received for 1\nI0313 13:22:03.283561 287 log.go:172] (0xc000116dc0) (0xc000806000) Create stream\nI0313 13:22:03.283574 287 log.go:172] (0xc000116dc0) (0xc000806000) Stream added, broadcasting: 3\nI0313 13:22:03.284569 287 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0313 13:22:03.284605 287 log.go:172] (0xc000116dc0) (0xc0006768c0) Create stream\nI0313 13:22:03.284617 287 log.go:172] (0xc000116dc0) (0xc0006768c0) Stream added, broadcasting: 5\nI0313 13:22:03.285996 287 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0313 13:22:03.359872 287 log.go:172] (0xc000116dc0) Data frame received for 5\nI0313 13:22:03.359895 287 log.go:172] (0xc0006768c0) (5) Data frame handling\nI0313 13:22:03.359911 287 log.go:172] (0xc0006768c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0313 13:22:03.386199 287 log.go:172] (0xc000116dc0) Data frame received for 3\nI0313 13:22:03.386236 287 log.go:172] (0xc000806000) (3) Data frame handling\nI0313 13:22:03.386255 287 log.go:172] (0xc000806000) (3) Data frame sent\nI0313 13:22:03.386266 287 log.go:172] (0xc000116dc0) Data frame received for 3\nI0313 13:22:03.386274 287 log.go:172] (0xc000806000) (3) Data frame handling\nI0313 13:22:03.386558 287 log.go:172] (0xc000116dc0) Data frame received for 5\nI0313 13:22:03.386576 287 log.go:172] (0xc0006768c0) (5) Data frame handling\nI0313 13:22:03.387867 287 log.go:172] (0xc000116dc0) Data frame received for 1\nI0313 13:22:03.387885 287 log.go:172] (0xc000676820) (1) Data frame handling\nI0313 13:22:03.387892 287 log.go:172] (0xc000676820) (1) Data frame sent\nI0313 13:22:03.387903 287 log.go:172] (0xc000116dc0) (0xc000676820) Stream removed, broadcasting: 1\nI0313 13:22:03.387914 287 log.go:172] (0xc000116dc0) Go away received\nI0313 13:22:03.388222 287 log.go:172] (0xc000116dc0) (0xc000676820) Stream removed, broadcasting: 1\nI0313 13:22:03.388241 287 log.go:172] (0xc000116dc0) (0xc000806000) Stream removed, broadcasting: 3\nI0313 13:22:03.388253 287 log.go:172] (0xc000116dc0) (0xc0006768c0) Stream removed, broadcasting: 5\n" Mar 13 13:22:03.391: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 13 13:22:03.391: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 13 13:22:13.443: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 13 13:22:23.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8732 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 13 13:22:23.678: INFO: stderr: "I0313 13:22:23.593085 307 log.go:172] (0xc00076a420) (0xc000710640) Create stream\nI0313 13:22:23.593129 307 log.go:172] (0xc00076a420) (0xc000710640) Stream added, broadcasting: 1\nI0313 13:22:23.595017 307 log.go:172] (0xc00076a420) Reply frame received for 1\nI0313 13:22:23.595044 307 log.go:172] (0xc00076a420) (0xc0007106e0) Create stream\nI0313 13:22:23.595054 307 log.go:172] (0xc00076a420) (0xc0007106e0) Stream added, broadcasting: 3\nI0313 13:22:23.595790 307 log.go:172] (0xc00076a420) Reply frame received for 3\nI0313 13:22:23.595816 307 log.go:172] (0xc00076a420) (0xc0006d0280) Create stream\nI0313 13:22:23.595824 307 log.go:172] (0xc00076a420) (0xc0006d0280) Stream added, broadcasting: 5\nI0313 13:22:23.596472 307 log.go:172] (0xc00076a420) Reply frame received for 5\nI0313 13:22:23.674973 307 log.go:172] (0xc00076a420) 
Data frame received for 3\nI0313 13:22:23.674998 307 log.go:172] (0xc0007106e0) (3) Data frame handling\nI0313 13:22:23.675005 307 log.go:172] (0xc0007106e0) (3) Data frame sent\nI0313 13:22:23.675010 307 log.go:172] (0xc00076a420) Data frame received for 3\nI0313 13:22:23.675014 307 log.go:172] (0xc0007106e0) (3) Data frame handling\nI0313 13:22:23.675031 307 log.go:172] (0xc00076a420) Data frame received for 5\nI0313 13:22:23.675036 307 log.go:172] (0xc0006d0280) (5) Data frame handling\nI0313 13:22:23.675041 307 log.go:172] (0xc0006d0280) (5) Data frame sent\nI0313 13:22:23.675045 307 log.go:172] (0xc00076a420) Data frame received for 5\nI0313 13:22:23.675050 307 log.go:172] (0xc0006d0280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0313 13:22:23.676226 307 log.go:172] (0xc00076a420) Data frame received for 1\nI0313 13:22:23.676244 307 log.go:172] (0xc000710640) (1) Data frame handling\nI0313 13:22:23.676252 307 log.go:172] (0xc000710640) (1) Data frame sent\nI0313 13:22:23.676261 307 log.go:172] (0xc00076a420) (0xc000710640) Stream removed, broadcasting: 1\nI0313 13:22:23.676271 307 log.go:172] (0xc00076a420) Go away received\nI0313 13:22:23.676563 307 log.go:172] (0xc00076a420) (0xc000710640) Stream removed, broadcasting: 1\nI0313 13:22:23.676576 307 log.go:172] (0xc00076a420) (0xc0007106e0) Stream removed, broadcasting: 3\nI0313 13:22:23.676582 307 log.go:172] (0xc00076a420) (0xc0006d0280) Stream removed, broadcasting: 5\n" Mar 13 13:22:23.678: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 13 13:22:23.678: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 13 13:22:33.698: INFO: Waiting for StatefulSet statefulset-8732/ss2 to complete update Mar 13 13:22:33.698: INFO: Waiting for Pod statefulset-8732/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 13 13:22:33.698: INFO: Waiting for Pod statefulset-8732/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 13 13:22:33.698: INFO: Waiting for Pod statefulset-8732/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 13 13:22:43.706: INFO: Waiting for StatefulSet statefulset-8732/ss2 to complete update Mar 13 13:22:43.706: INFO: Waiting for Pod statefulset-8732/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 13 13:22:43.706: INFO: Waiting for Pod statefulset-8732/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 13 13:22:53.705: INFO: Waiting for StatefulSet statefulset-8732/ss2 to complete update Mar 13 13:22:53.705: INFO: Waiting for Pod statefulset-8732/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 13 13:23:03.705: INFO: Deleting all statefulset in ns statefulset-8732 Mar 13 13:23:03.707: INFO: Scaling statefulset ss2 to 0 Mar 13 13:23:43.724: INFO: Waiting for statefulset status.replicas updated to 0 Mar 13 13:23:43.726: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:23:43.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8732" for this suite. 
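[Editor's note] The StatefulSet spec above drives a RollingUpdate by editing spec.template (nginx:1.14-alpine -> 1.15-alpine, then back); each template edit creates a new controller revision, and the revision-wait logs show pods converging on it in reverse ordinal order. A trimmed StatefulSet of that shape might look like this; the Service name "test", set name "ss2", and image tags appear in the log, while the labels are assumptions:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2                       # name taken from the log
spec:
  serviceName: test               # headless Service created earlier in the spec
  replicas: 3
  selector:
    matchLabels:
      app: ss2-demo               # assumed label
  updateStrategy:
    type: RollingUpdate           # pods replaced one at a time, highest ordinal first
  template:
    metadata:
      labels:
        app: ss2-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # changed to 1.15-alpine and back in the run

Rolling back, as exercised here, is just re-applying the previous template; on recent clusters "kubectl rollout undo statefulset/ss2" achieves the same effect.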
Mar 13 13:23:49.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:23:49.840: INFO: namespace statefulset-8732 deletion completed in 6.094748637s • [SLOW TEST:177.283 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:23:49.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 13 13:23:49.877: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3940,SelfLink:/api/v1/namespaces/watch-3940/configmaps/e2e-watch-test-configmap-a,UID:fd9bccba-4b83-4c58-95af-bfea99d14900,ResourceVersion:905125,Generation:0,CreationTimestamp:2020-03-13 13:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 13 13:23:49.877: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3940,SelfLink:/api/v1/namespaces/watch-3940/configmaps/e2e-watch-test-configmap-a,UID:fd9bccba-4b83-4c58-95af-bfea99d14900,ResourceVersion:905125,Generation:0,CreationTimestamp:2020-03-13 13:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 13 13:23:59.884: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3940,SelfLink:/api/v1/namespaces/watch-3940/configmaps/e2e-watch-test-configmap-a,UID:fd9bccba-4b83-4c58-95af-bfea99d14900,ResourceVersion:905145,Generation:0,CreationTimestamp:2020-03-13 13:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 13 13:23:59.884: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3940,SelfLink:/api/v1/namespaces/watch-3940/configmaps/e2e-watch-test-configmap-a,UID:fd9bccba-4b83-4c58-95af-bfea99d14900,ResourceVersion:905145,Generation:0,CreationTimestamp:2020-03-13 13:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 13 13:24:09.899: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3940,SelfLink:/api/v1/namespaces/watch-3940/configmaps/e2e-watch-test-configmap-a,UID:fd9bccba-4b83-4c58-95af-bfea99d14900,ResourceVersion:905165,Generation:0,CreationTimestamp:2020-03-13 13:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 13 13:24:09.899: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3940,SelfLink:/api/v1/namespaces/watch-3940/configmaps/e2e-watch-test-configmap-a,UID:fd9bccba-4b83-4c58-95af-bfea99d14900,ResourceVersion:905165,Generation:0,CreationTimestamp:2020-03-13 13:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 13 13:24:19.905: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3940,SelfLink:/api/v1/namespaces/watch-3940/configmaps/e2e-watch-test-configmap-a,UID:fd9bccba-4b83-4c58-95af-bfea99d14900,ResourceVersion:905185,Generation:0,CreationTimestamp:2020-03-13 13:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Mar 13 13:24:19.906: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3940,SelfLink:/api/v1/namespaces/watch-3940/configmaps/e2e-watch-test-configmap-a,UID:fd9bccba-4b83-4c58-95af-bfea99d14900,ResourceVersion:905185,Generation:0,CreationTimestamp:2020-03-13 13:23:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 13 13:24:29.911: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3940,SelfLink:/api/v1/namespaces/watch-3940/configmaps/e2e-watch-test-configmap-b,UID:06b2902d-0f9a-4ec3-a868-7d9d4a2b758d,ResourceVersion:905205,Generation:0,CreationTimestamp:2020-03-13 13:24:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 13 13:24:29.911: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3940,SelfLink:/api/v1/namespaces/watch-3940/configmaps/e2e-watch-test-configmap-b,UID:06b2902d-0f9a-4ec3-a868-7d9d4a2b758d,ResourceVersion:905205,Generation:0,CreationTimestamp:2020-03-13 13:24:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 13 13:24:39.916: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3940,SelfLink:/api/v1/namespaces/watch-3940/configmaps/e2e-watch-test-configmap-b,UID:06b2902d-0f9a-4ec3-a868-7d9d4a2b758d,ResourceVersion:905228,Generation:0,CreationTimestamp:2020-03-13 13:24:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 13 13:24:39.916: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3940,SelfLink:/api/v1/namespaces/watch-3940/configmaps/e2e-watch-test-configmap-b,UID:06b2902d-0f9a-4ec3-a868-7d9d4a2b758d,ResourceVersion:905228,Generation:0,CreationTimestamp:2020-03-13 13:24:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:24:49.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3940" for this suite. Mar 13 13:24:55.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:24:56.014: INFO: namespace watch-3940 deletion completed in 6.093552218s • [SLOW TEST:66.174 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:24:56.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 13 13:24:56.052: INFO: Waiting up to 5m0s for pod "pod-e5ef89aa-6568-4cf4-8818-9900bea5284a" in namespace "emptydir-8778" to be "success or failure" Mar 13 13:24:56.057: INFO: Pod "pod-e5ef89aa-6568-4cf4-8818-9900bea5284a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.032561ms Mar 13 13:24:58.061: INFO: Pod "pod-e5ef89aa-6568-4cf4-8818-9900bea5284a": Phase="Running", Reason="", readiness=true. Elapsed: 2.008674248s Mar 13 13:25:00.065: INFO: Pod "pod-e5ef89aa-6568-4cf4-8818-9900bea5284a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012454953s STEP: Saw pod success Mar 13 13:25:00.065: INFO: Pod "pod-e5ef89aa-6568-4cf4-8818-9900bea5284a" satisfied condition "success or failure" Mar 13 13:25:00.067: INFO: Trying to get logs from node iruya-worker pod pod-e5ef89aa-6568-4cf4-8818-9900bea5284a container test-container: STEP: delete the pod Mar 13 13:25:00.097: INFO: Waiting for pod pod-e5ef89aa-6568-4cf4-8818-9900bea5284a to disappear Mar 13 13:25:00.100: INFO: Pod pod-e5ef89aa-6568-4cf4-8818-9900bea5284a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:25:00.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8778" for this suite. 
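[Editor's note] The Watchers spec earlier above registers label-selector watches and asserts each watcher sees exactly the ADDED/MODIFIED/DELETED events for ConfigMaps matching its selector (the "mutation" counter in Data is what gets modified). The same behavior can be observed by hand, e.g.:

# terminal 1: watch ConfigMaps matching label A (selector value taken from the log)
#   kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch
# terminal 2: create, edit, and delete a matching ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a            # name taken from the log
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"                               # bump to "2" to trigger a MODIFIED event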
Mar 13 13:25:06.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:25:06.165: INFO: namespace emptydir-8778 deletion completed in 6.060712861s • [SLOW TEST:10.150 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:25:06.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-6634ab49-8b56-4331-9d43-9f6cbfcdf1b4 STEP: Creating a pod to test consume secrets Mar 13 13:25:06.238: INFO: Waiting up to 5m0s for pod "pod-secrets-88d4e067-68f6-4fad-a9d9-cf969d1f8284" in namespace "secrets-7343" to be "success or failure" Mar 13 13:25:06.243: INFO: Pod "pod-secrets-88d4e067-68f6-4fad-a9d9-cf969d1f8284": Phase="Pending", Reason="", readiness=false. Elapsed: 4.85936ms Mar 13 13:25:08.445: INFO: Pod "pod-secrets-88d4e067-68f6-4fad-a9d9-cf969d1f8284": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.207123415s STEP: Saw pod success Mar 13 13:25:08.445: INFO: Pod "pod-secrets-88d4e067-68f6-4fad-a9d9-cf969d1f8284" satisfied condition "success or failure" Mar 13 13:25:08.448: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-88d4e067-68f6-4fad-a9d9-cf969d1f8284 container secret-volume-test: STEP: delete the pod Mar 13 13:25:08.478: INFO: Waiting for pod pod-secrets-88d4e067-68f6-4fad-a9d9-cf969d1f8284 to disappear Mar 13 13:25:08.489: INFO: Pod pod-secrets-88d4e067-68f6-4fad-a9d9-cf969d1f8284 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:25:08.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7343" for this suite. 
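[Editor's note] The Secrets spec above mounts the same Secret at two paths in one pod and verifies both mounts serve the data. Sketch with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-mount-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test      # container name borrowed from the log
    image: busybox:1.29           # assumed image
    command: ["sh", "-c", "cat /etc/secret-a/data-1 /etc/secret-b/data-1"]
    volumeMounts:
    - name: secret-a
      mountPath: /etc/secret-a
      readOnly: true
    - name: secret-b
      mountPath: /etc/secret-b
      readOnly: true
  volumes:                        # one Secret, consumed through two volumes
  - name: secret-a
    secret:
      secretName: my-secret       # hypothetical Secret with key data-1
  - name: secret-b
    secret:
      secretName: my-secret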
Mar 13 13:25:14.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:25:14.592: INFO: namespace secrets-7343 deletion completed in 6.100900962s • [SLOW TEST:8.427 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:25:14.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 13 13:25:17.168: INFO: Successfully updated pod "labelsupdate2b504278-e81a-4c39-ab70-7d4f3ddba7a0" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:25:19.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8367" for this suite. 
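[Editor's note] The "update labels on modification" spec above relies on the kubelet refreshing downward API files when pod metadata changes: after the pod's labels are patched ("Successfully updated pod ..."), the projected labels file is rewritten without restarting the container. Sketch with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: labels-demo               # hypothetical name
  labels:
    stage: one
spec:
  containers:
  - name: client-container        # container name borrowed from the log
    image: busybox:1.29           # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

Relabeling the running pod, e.g. "kubectl label pod labels-demo stage=two --overwrite", makes the kubelet rewrite /etc/podinfo/labels within its sync period.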
Mar 13 13:25:41.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:25:41.250: INFO: namespace projected-8367 deletion completed in 22.055359597s • [SLOW TEST:26.657 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:25:41.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Mar 13 13:25:41.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5704' Mar 13 13:25:41.559: INFO: stderr: "" Mar 13 13:25:41.559: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 13 13:25:41.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5704' Mar 13 13:25:41.648: INFO: stderr: "" Mar 13 13:25:41.648: INFO: stdout: "update-demo-nautilus-572ws update-demo-nautilus-ltr6n " Mar 13 13:25:41.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-572ws -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5704' Mar 13 13:25:41.709: INFO: stderr: "" Mar 13 13:25:41.709: INFO: stdout: "" Mar 13 13:25:41.709: INFO: update-demo-nautilus-572ws is created but not running Mar 13 13:25:46.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5704' Mar 13 13:25:46.770: INFO: stderr: "" Mar 13 13:25:46.770: INFO: stdout: "update-demo-nautilus-572ws update-demo-nautilus-ltr6n " Mar 13 13:25:46.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-572ws -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5704' Mar 13 13:25:46.826: INFO: stderr: "" Mar 13 13:25:46.826: INFO: stdout: "true" Mar 13 13:25:46.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-572ws -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5704' Mar 13 13:25:46.882: INFO: stderr: "" Mar 13 13:25:46.883: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 13 13:25:46.883: INFO: validating pod update-demo-nautilus-572ws Mar 13 13:25:46.885: INFO: got data: { "image": "nautilus.jpg" } Mar 13 13:25:46.885: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 13 13:25:46.885: INFO: update-demo-nautilus-572ws is verified up and running Mar 13 13:25:46.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ltr6n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5704' Mar 13 13:25:46.943: INFO: stderr: "" Mar 13 13:25:46.943: INFO: stdout: "true" Mar 13 13:25:46.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ltr6n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5704' Mar 13 13:25:46.999: INFO: stderr: "" Mar 13 13:25:46.999: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 13 13:25:46.999: INFO: validating pod update-demo-nautilus-ltr6n Mar 13 13:25:47.015: INFO: got data: { "image": "nautilus.jpg" } Mar 13 13:25:47.015: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 13 13:25:47.015: INFO: update-demo-nautilus-ltr6n is verified up and running STEP: rolling-update to new replication controller Mar 13 13:25:47.016: INFO: scanned /root for discovery docs: Mar 13 13:25:47.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5704' Mar 13 13:26:09.477: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 13 13:26:09.477: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 13 13:26:09.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5704' Mar 13 13:26:09.536: INFO: stderr: "" Mar 13 13:26:09.536: INFO: stdout: "update-demo-kitten-vcsvj update-demo-kitten-vx7kd " Mar 13 13:26:09.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vcsvj -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5704' Mar 13 13:26:09.592: INFO: stderr: "" Mar 13 13:26:09.592: INFO: stdout: "true" Mar 13 13:26:09.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vcsvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5704' Mar 13 13:26:09.647: INFO: stderr: "" Mar 13 13:26:09.647: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 13 13:26:09.647: INFO: validating pod update-demo-kitten-vcsvj Mar 13 13:26:09.650: INFO: got data: { "image": "kitten.jpg" } Mar 13 13:26:09.650: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 13 13:26:09.650: INFO: update-demo-kitten-vcsvj is verified up and running Mar 13 13:26:09.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vx7kd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5704' Mar 13 13:26:09.707: INFO: stderr: "" Mar 13 13:26:09.707: INFO: stdout: "true" Mar 13 13:26:09.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vx7kd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5704' Mar 13 13:26:09.763: INFO: stderr: "" Mar 13 13:26:09.763: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 13 13:26:09.763: INFO: validating pod update-demo-kitten-vx7kd Mar 13 13:26:09.765: INFO: got data: { "image": "kitten.jpg" } Mar 13 13:26:09.765: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 13 13:26:09.765: INFO: update-demo-kitten-vx7kd is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:26:09.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5704" for this suite. 
Mar 13 13:26:31.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:26:31.821: INFO: namespace kubectl-5704 deletion completed in 22.053996822s • [SLOW TEST:50.571 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:26:31.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8847 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 13 13:26:31.860: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 13 13:26:54.203: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.70 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8847 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:26:54.203: INFO: >>> kubeConfig: /root/.kube/config I0313 13:26:54.227738 6 log.go:172] (0xc0012c2370) (0xc00115b5e0) Create stream I0313 13:26:54.227754 6 log.go:172] (0xc0012c2370) (0xc00115b5e0) Stream added, broadcasting: 1 I0313 13:26:54.228827 6 log.go:172] (0xc0012c2370) Reply frame received for 1 I0313 13:26:54.228848 6 log.go:172] (0xc0012c2370) (0xc00115b720) Create stream I0313 13:26:54.228858 6 log.go:172] (0xc0012c2370) (0xc00115b720) Stream added, broadcasting: 3 I0313 13:26:54.229340 6 log.go:172] (0xc0012c2370) Reply frame received for 3 I0313 13:26:54.229357 6 log.go:172] (0xc0012c2370) (0xc0005c1ea0) Create stream I0313 13:26:54.229363 6 log.go:172] (0xc0012c2370) (0xc0005c1ea0) Stream added, broadcasting: 5 I0313 13:26:54.229828 6 log.go:172] (0xc0012c2370) Reply frame received for 5 I0313 13:26:55.291243 6 log.go:172] (0xc0012c2370) Data frame received for 3 I0313 13:26:55.291259 6 log.go:172] (0xc00115b720) (3) Data frame handling I0313 13:26:55.291265 6 log.go:172] (0xc00115b720) (3) Data frame sent I0313 13:26:55.291270 6 log.go:172] (0xc0012c2370) Data frame received for 3 I0313 13:26:55.291275 6 log.go:172] (0xc00115b720) (3) Data frame handling I0313 13:26:55.291288 6 log.go:172] (0xc0012c2370) Data frame received for 5 I0313 13:26:55.291294 6 log.go:172] (0xc0005c1ea0) (5) Data frame handling I0313 13:26:55.292565 6 log.go:172] (0xc0012c2370) Data frame received for 1 I0313 13:26:55.292577 6 log.go:172] (0xc00115b5e0) (1) Data 
frame handling I0313 13:26:55.292583 6 log.go:172] (0xc00115b5e0) (1) Data frame sent I0313 13:26:55.292589 6 log.go:172] (0xc0012c2370) (0xc00115b5e0) Stream removed, broadcasting: 1 I0313 13:26:55.292596 6 log.go:172] (0xc0012c2370) Go away received I0313 13:26:55.292669 6 log.go:172] (0xc0012c2370) (0xc00115b5e0) Stream removed, broadcasting: 1 I0313 13:26:55.292681 6 log.go:172] (0xc0012c2370) (0xc00115b720) Stream removed, broadcasting: 3 I0313 13:26:55.292689 6 log.go:172] (0xc0012c2370) (0xc0005c1ea0) Stream removed, broadcasting: 5 Mar 13 13:26:55.292: INFO: Found all expected endpoints: [netserver-0] Mar 13 13:26:55.294: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.205 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8847 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:26:55.294: INFO: >>> kubeConfig: /root/.kube/config I0313 13:26:55.309787 6 log.go:172] (0xc000f5a580) (0xc0030200a0) Create stream I0313 13:26:55.309799 6 log.go:172] (0xc000f5a580) (0xc0030200a0) Stream added, broadcasting: 1 I0313 13:26:55.311173 6 log.go:172] (0xc000f5a580) Reply frame received for 1 I0313 13:26:55.311190 6 log.go:172] (0xc000f5a580) (0xc000fd6000) Create stream I0313 13:26:55.311197 6 log.go:172] (0xc000f5a580) (0xc000fd6000) Stream added, broadcasting: 3 I0313 13:26:55.311617 6 log.go:172] (0xc000f5a580) Reply frame received for 3 I0313 13:26:55.311632 6 log.go:172] (0xc000f5a580) (0xc0030201e0) Create stream I0313 13:26:55.311637 6 log.go:172] (0xc000f5a580) (0xc0030201e0) Stream added, broadcasting: 5 I0313 13:26:55.312071 6 log.go:172] (0xc000f5a580) Reply frame received for 5 I0313 13:26:56.355049 6 log.go:172] (0xc000f5a580) Data frame received for 3 I0313 13:26:56.355073 6 log.go:172] (0xc000fd6000) (3) Data frame handling I0313 13:26:56.355090 6 log.go:172] (0xc000fd6000) (3) Data frame sent I0313 13:26:56.355097 6 log.go:172] (0xc000f5a580) Data frame received for 3 I0313 13:26:56.355105 6 log.go:172] (0xc000fd6000) (3) Data frame handling I0313 13:26:56.355302 6 log.go:172] (0xc000f5a580) Data frame received for 5 I0313 13:26:56.355312 6 log.go:172] (0xc0030201e0) (5) Data frame handling I0313 13:26:56.356743 6 log.go:172] (0xc000f5a580) Data frame received for 1 I0313 13:26:56.356763 6 log.go:172] (0xc0030200a0) (1) Data frame handling I0313 13:26:56.356774 6 log.go:172] (0xc0030200a0) (1) Data frame sent I0313 13:26:56.356789 6 log.go:172] (0xc000f5a580) (0xc0030200a0) Stream removed, broadcasting: 1 I0313 13:26:56.356803 6 log.go:172] (0xc000f5a580) Go away received I0313 13:26:56.356937 6 log.go:172] (0xc000f5a580) (0xc0030200a0) Stream removed, broadcasting: 1 I0313 13:26:56.356960 6 log.go:172] (0xc000f5a580) (0xc000fd6000) Stream removed, broadcasting: 3 I0313 13:26:56.356998 6 log.go:172] (0xc000f5a580) (0xc0030201e0) Stream removed, broadcasting: 5 Mar 13 13:26:56.357: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:26:56.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8847" for this suite. 
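The UDP reachability check is visible in the ExecWithOptions entries: from a host-network helper pod, the suite pipes the literal string hostName into nc against each netserver pod IP and treats a non-empty reply (the pod's hostname) as success. Reproduced by hand (namespace, pod name, and the 10.244.x.x endpoint are taken from this run and change every run):

$ kubectl exec -n pod-network-test-8847 host-test-container-pod -c hostexec -- \
    /bin/sh -c "echo hostName | nc -w 1 -u 10.244.2.70 8081 | grep -v '^\s*$'"

The trailing grep -v '^\s*$' strips blank lines, so an empty reply fails the pipeline instead of passing as empty output.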
Mar 13 13:27:18.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:27:18.426: INFO: namespace pod-network-test-8847 deletion completed in 22.066362604s • [SLOW TEST:46.605 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:27:18.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-3990a0c3-ee9b-4f69-8bea-295942fa5f6a STEP: Creating a pod to test consume configMaps Mar 13 13:27:18.506: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-42050367-3a8c-4341-aa12-cb7229018d13" in namespace "projected-7167" to be "success or failure" Mar 13 13:27:18.510: INFO: Pod "pod-projected-configmaps-42050367-3a8c-4341-aa12-cb7229018d13": Phase="Pending", Reason="", readiness=false. Elapsed: 3.882387ms Mar 13 13:27:20.513: INFO: Pod "pod-projected-configmaps-42050367-3a8c-4341-aa12-cb7229018d13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006604351s STEP: Saw pod success Mar 13 13:27:20.513: INFO: Pod "pod-projected-configmaps-42050367-3a8c-4341-aa12-cb7229018d13" satisfied condition "success or failure" Mar 13 13:27:20.515: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-42050367-3a8c-4341-aa12-cb7229018d13 container projected-configmap-volume-test: STEP: delete the pod Mar 13 13:27:20.535: INFO: Waiting for pod pod-projected-configmaps-42050367-3a8c-4341-aa12-cb7229018d13 to disappear Mar 13 13:27:20.539: INFO: Pod pod-projected-configmaps-42050367-3a8c-4341-aa12-cb7229018d13 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:27:20.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7167" for this suite. 
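"Mappings and Item mode" is shorthand for the two knobs this test exercises: items remaps a ConfigMap key to an arbitrary relative path, and mode sets that one file's permission bits. A hedged sketch in which all names and the 0400 mode are illustrative:

$ kubectl create configmap projected-configmap-test --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/cm/path/to/data-2 && cat /etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-configmap-test
          items:
          - key: data-1
            path: path/to/data-2   # the "mapping": key data-1 appears under a new path
            mode: 0400             # the per-item mode override
EOF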
Mar 13 13:27:26.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:27:26.634: INFO: namespace projected-7167 deletion completed in 6.092440559s • [SLOW TEST:8.208 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:27:26.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 13:27:26.685: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06b3350c-557f-470d-8ec8-2a4230dc891a" in namespace "downward-api-5808" to be "success or failure" Mar 13 13:27:26.718: INFO: Pod "downwardapi-volume-06b3350c-557f-470d-8ec8-2a4230dc891a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.584791ms Mar 13 13:27:28.722: INFO: Pod "downwardapi-volume-06b3350c-557f-470d-8ec8-2a4230dc891a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.037063146s STEP: Saw pod success Mar 13 13:27:28.722: INFO: Pod "downwardapi-volume-06b3350c-557f-470d-8ec8-2a4230dc891a" satisfied condition "success or failure" Mar 13 13:27:28.725: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-06b3350c-557f-470d-8ec8-2a4230dc891a container client-container: STEP: delete the pod Mar 13 13:27:28.759: INFO: Waiting for pod downwardapi-volume-06b3350c-557f-470d-8ec8-2a4230dc891a to disappear Mar 13 13:27:28.777: INFO: Pod downwardapi-volume-06b3350c-557f-470d-8ec8-2a4230dc891a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:27:28.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5808" for this suite. 
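DefaultMode is the volume-wide counterpart to the per-item mode above: every file the downward API volume renders gets these permission bits unless an item overrides them, and the test simply stats the result. An illustrative sketch:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]   # expect -r-------- for 0400
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF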
Mar 13 13:27:34.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:27:34.842: INFO: namespace downward-api-5808 deletion completed in 6.06089964s • [SLOW TEST:8.208 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:27:34.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 13 13:27:34.940: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 13 13:27:39.945: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:27:40.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2555" for this suite. 
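The "released" in this test's name means ownership, not deletion: a ReplicationController only counts pods matched by its selector, so relabeling one of its pods out of the selector makes the controller orphan it (dropping the ownerReference) and create a replacement to restore the replica count. With the suite's pod-release label, the moving parts look like this, where <pod-from-rc> is a placeholder for the run's generated pod name:

$ kubectl label pod <pod-from-rc> name=not-pod-release --overwrite
# the relabeled pod keeps running but is no longer owned:
$ kubectl get pod <pod-from-rc> -o jsonpath='{.metadata.ownerReferences}'
# and the RC starts a fresh pod to get back to spec.replicas:
$ kubectl get pods -l name=pod-release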
Mar 13 13:27:46.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:27:47.072: INFO: namespace replication-controller-2555 deletion completed in 6.099452228s • [SLOW TEST:12.230 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:27:47.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Mar 13 13:27:47.130: INFO: Waiting up to 5m0s for pod "client-containers-88a8df91-21b2-485b-a1cd-40ca284b96a0" in namespace "containers-6928" to be "success or failure" Mar 13 13:27:47.147: INFO: Pod "client-containers-88a8df91-21b2-485b-a1cd-40ca284b96a0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.296842ms Mar 13 13:27:49.151: INFO: Pod "client-containers-88a8df91-21b2-485b-a1cd-40ca284b96a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021149217s STEP: Saw pod success Mar 13 13:27:49.151: INFO: Pod "client-containers-88a8df91-21b2-485b-a1cd-40ca284b96a0" satisfied condition "success or failure" Mar 13 13:27:49.154: INFO: Trying to get logs from node iruya-worker2 pod client-containers-88a8df91-21b2-485b-a1cd-40ca284b96a0 container test-container: STEP: delete the pod Mar 13 13:27:49.198: INFO: Waiting for pod client-containers-88a8df91-21b2-485b-a1cd-40ca284b96a0 to disappear Mar 13 13:27:49.205: INFO: Pod client-containers-88a8df91-21b2-485b-a1cd-40ca284b96a0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:27:49.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6928" for this suite. 
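"Override the image's default command" maps onto the pod spec's command field, which replaces the image ENTRYPOINT (args, not used here, replaces CMD). The test just runs the override and compares the container log. A sketch with illustrative names:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                                  # whatever ENTRYPOINT/CMD the image declares
    command: ["/bin/echo", "command", "override"]   # wins over the image ENTRYPOINT
EOF
$ kubectl logs entrypoint-override-demo
command override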
Mar 13 13:27:55.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:27:55.267: INFO: namespace containers-6928 deletion completed in 6.057746047s • [SLOW TEST:8.194 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:27:55.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 13:27:55.327: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f77acf87-02a3-41fc-b01d-2fcd42b54945" in namespace "projected-1345" to be "success or failure" Mar 13 13:27:55.376: INFO: Pod "downwardapi-volume-f77acf87-02a3-41fc-b01d-2fcd42b54945": Phase="Pending", Reason="", readiness=false. Elapsed: 49.17346ms Mar 13 13:27:57.380: INFO: Pod "downwardapi-volume-f77acf87-02a3-41fc-b01d-2fcd42b54945": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053039374s Mar 13 13:27:59.384: INFO: Pod "downwardapi-volume-f77acf87-02a3-41fc-b01d-2fcd42b54945": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056819527s STEP: Saw pod success Mar 13 13:27:59.384: INFO: Pod "downwardapi-volume-f77acf87-02a3-41fc-b01d-2fcd42b54945" satisfied condition "success or failure" Mar 13 13:27:59.386: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f77acf87-02a3-41fc-b01d-2fcd42b54945 container client-container: STEP: delete the pod Mar 13 13:27:59.414: INFO: Waiting for pod downwardapi-volume-f77acf87-02a3-41fc-b01d-2fcd42b54945 to disappear Mar 13 13:27:59.427: INFO: Pod downwardapi-volume-f77acf87-02a3-41fc-b01d-2fcd42b54945 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:27:59.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1345" for this suite. 
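Resource values reach the container through resourceFieldRef rather than fieldRef, and divisor picks the unit the number is expressed in. A sketch of the memory-request variant, where the names, sizes, and divisor are illustrative:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-memrequest-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]   # prints 33554432 (32Mi in bytes)
    resources:
      requests:
        memory: 32Mi
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: "1"
EOF

With divisor "1Mi" the file would read 32 instead.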
Mar 13 13:28:05.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:28:05.540: INFO: namespace projected-1345 deletion completed in 6.110469101s • [SLOW TEST:10.273 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:28:05.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 13 13:28:05.573: INFO: Waiting up to 5m0s for pod "pod-47a26e71-b883-4b7c-a03a-5fdb2e4c9057" in namespace "emptydir-1535" to be "success or failure" Mar 13 13:28:05.616: INFO: Pod "pod-47a26e71-b883-4b7c-a03a-5fdb2e4c9057": Phase="Pending", Reason="", readiness=false. Elapsed: 43.441525ms Mar 13 13:28:07.619: INFO: Pod "pod-47a26e71-b883-4b7c-a03a-5fdb2e4c9057": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.046775274s STEP: Saw pod success Mar 13 13:28:07.619: INFO: Pod "pod-47a26e71-b883-4b7c-a03a-5fdb2e4c9057" satisfied condition "success or failure" Mar 13 13:28:07.622: INFO: Trying to get logs from node iruya-worker2 pod pod-47a26e71-b883-4b7c-a03a-5fdb2e4c9057 container test-container: STEP: delete the pod Mar 13 13:28:07.658: INFO: Waiting for pod pod-47a26e71-b883-4b7c-a03a-5fdb2e4c9057 to disappear Mar 13 13:28:07.682: INFO: Pod pod-47a26e71-b883-4b7c-a03a-5fdb2e4c9057 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:28:07.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1535" for this suite. 
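The "(non-root,0777,default)" triple names the legs of the emptyDir test matrix: the pod runs as a non-root UID, the expected mode on the volume is 0777, and the medium is the default disk-backed one (the Memory variants use tmpfs). A loose reproduction of that matrix, not the suite's exact mounttest invocation:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # the non-root leg
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]   # emptyDir mounts are created world-writable: drwxrwxrwx
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium; medium: Memory is the tmpfs leg
EOF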
Mar 13 13:28:13.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:28:13.753: INFO: namespace emptydir-1535 deletion completed in 6.068056846s • [SLOW TEST:8.213 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:28:13.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4400.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4400.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4400.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 13 13:28:17.833: INFO: DNS probes using dns-test-363ccf4f-0d2d-4068-b3fc-28a0f1fcf15e succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4400.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4400.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4400.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 13 13:28:21.966: INFO: File wheezy_udp@dns-test-service-3.dns-4400.svc.cluster.local from pod dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 13:28:21.970: INFO: File jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local from pod dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 13:28:21.970: INFO: Lookups using dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f failed for: [wheezy_udp@dns-test-service-3.dns-4400.svc.cluster.local jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local] Mar 13 13:28:26.975: INFO: File wheezy_udp@dns-test-service-3.dns-4400.svc.cluster.local from pod dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f contains 'foo.example.com. 
' instead of 'bar.example.com.' Mar 13 13:28:26.978: INFO: File jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local from pod dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 13:28:26.978: INFO: Lookups using dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f failed for: [wheezy_udp@dns-test-service-3.dns-4400.svc.cluster.local jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local] Mar 13 13:28:31.975: INFO: File wheezy_udp@dns-test-service-3.dns-4400.svc.cluster.local from pod dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 13:28:31.978: INFO: File jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local from pod dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 13:28:31.978: INFO: Lookups using dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f failed for: [wheezy_udp@dns-test-service-3.dns-4400.svc.cluster.local jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local] Mar 13 13:28:36.975: INFO: File wheezy_udp@dns-test-service-3.dns-4400.svc.cluster.local from pod dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 13:28:36.978: INFO: File jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local from pod dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 13:28:36.978: INFO: Lookups using dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f failed for: [wheezy_udp@dns-test-service-3.dns-4400.svc.cluster.local jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local] Mar 13 13:28:41.975: INFO: File wheezy_udp@dns-test-service-3.dns-4400.svc.cluster.local from pod dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 13:28:41.978: INFO: File jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local from pod dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f contains 'foo.example.com. ' instead of 'bar.example.com.' 
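The repeated failures above are the test working as designed: the probe pods run dig +short ... CNAME in a one-second loop, so several iterations still return the old foo.example.com target before the changed externalName propagates through the cluster DNS. The service being probed can be reproduced as follows (service name and domain are from the run; <probe-pod> is a placeholder):

$ kubectl create service externalname dns-test-service-3 --external-name foo.example.com -n dns-4400
$ kubectl exec <probe-pod> -n dns-4400 -- dig +short dns-test-service-3.dns-4400.svc.cluster.local CNAME
foo.example.com.
# repoint the service and the CNAME answer follows once the change propagates:
$ kubectl patch service dns-test-service-3 -n dns-4400 -p '{"spec":{"externalName":"bar.example.com"}}'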
Mar 13 13:28:41.978: INFO: Lookups using dns-4400/dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f failed for: [wheezy_udp@dns-test-service-3.dns-4400.svc.cluster.local jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local] Mar 13 13:28:46.977: INFO: DNS probes using dns-test-2443a225-7c15-492a-b8a4-6ea9c3ff5e3f succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4400.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4400.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4400.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 13 13:28:51.187: INFO: File jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local from pod dns-4400/dns-test-0b403c48-7e54-4c52-a4b4-0a4b02239458 contains '' instead of '10.102.43.89' Mar 13 13:28:51.187: INFO: Lookups using dns-4400/dns-test-0b403c48-7e54-4c52-a4b4-0a4b02239458 failed for: [jessie_udp@dns-test-service-3.dns-4400.svc.cluster.local] Mar 13 13:28:56.195: INFO: DNS probes using dns-test-0b403c48-7e54-4c52-a4b4-0a4b02239458 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:28:56.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4400" for this suite. Mar 13 13:29:02.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:29:02.371: INFO: namespace dns-4400 deletion completed in 6.079728387s • [SLOW TEST:48.618 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:29:02.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9178 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 13 13:29:02.412: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 13 13:29:20.575: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.78:8080/dial?request=hostName&protocol=udp&host=10.244.2.77&port=8081&tries=1'] Namespace:pod-network-test-9178 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:29:20.575: INFO: >>> kubeConfig: /root/.kube/config I0313 13:29:20.595269 6 log.go:172] (0xc000dc66e0) (0xc003020f00) Create stream I0313 13:29:20.595291 6 log.go:172] (0xc000dc66e0) (0xc003020f00) Stream added, broadcasting: 1 I0313 13:29:20.596997 6 log.go:172] (0xc000dc66e0) Reply frame received for 1 I0313 13:29:20.597026 6 log.go:172] (0xc000dc66e0) (0xc0004f0b40) Create stream I0313 13:29:20.597034 6 log.go:172] (0xc000dc66e0) (0xc0004f0b40) Stream added, broadcasting: 3 I0313 13:29:20.597666 6 log.go:172] (0xc000dc66e0) Reply frame received for 3 I0313 13:29:20.597688 6 log.go:172] (0xc000dc66e0) (0xc0027e4d20) Create stream I0313 13:29:20.597694 6 log.go:172] (0xc000dc66e0) (0xc0027e4d20) Stream added, broadcasting: 5 I0313 13:29:20.598349 6 log.go:172] (0xc000dc66e0) Reply frame received for 5 I0313 13:29:20.659889 6 log.go:172] (0xc000dc66e0) Data frame received for 3 I0313 13:29:20.659920 6 log.go:172] (0xc0004f0b40) (3) Data frame handling I0313 13:29:20.659942 6 log.go:172] (0xc0004f0b40) (3) Data frame sent I0313 13:29:20.660548 6 log.go:172] (0xc000dc66e0) Data frame received for 3 I0313 13:29:20.660576 6 log.go:172] (0xc0004f0b40) (3) Data frame handling I0313 13:29:20.660601 6 log.go:172] (0xc000dc66e0) Data frame received for 5 I0313 13:29:20.660617 6 log.go:172] (0xc0027e4d20) (5) Data frame handling I0313 13:29:20.662014 6 log.go:172] (0xc000dc66e0) Data frame received for 1 I0313 13:29:20.662045 6 log.go:172] (0xc003020f00) (1) Data frame handling I0313 13:29:20.662058 6 log.go:172] (0xc003020f00) (1) Data frame sent I0313 13:29:20.662075 6 log.go:172] (0xc000dc66e0) (0xc003020f00) Stream removed, broadcasting: 1 I0313 13:29:20.662110 6 log.go:172] (0xc000dc66e0) Go away received I0313 13:29:20.662202 6 log.go:172] (0xc000dc66e0) (0xc003020f00) Stream removed, broadcasting: 1 I0313 13:29:20.662220 6 log.go:172] (0xc000dc66e0) (0xc0004f0b40) Stream removed, broadcasting: 3 I0313 13:29:20.662226 6 log.go:172] (0xc000dc66e0) (0xc0027e4d20) Stream removed, broadcasting: 5 Mar 13 13:29:20.662: INFO: Waiting for endpoints: map[] Mar 13 13:29:20.664: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.78:8080/dial?request=hostName&protocol=udp&host=10.244.1.211&port=8081&tries=1'] Namespace:pod-network-test-9178 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:29:20.665: INFO: >>> kubeConfig: /root/.kube/config I0313 13:29:20.691059 6 log.go:172] (0xc00060c840) (0xc002f40280) Create stream I0313 13:29:20.691088 6 log.go:172] (0xc00060c840) (0xc002f40280) Stream added, broadcasting: 1 I0313 13:29:20.692940 6 log.go:172] (0xc00060c840) Reply frame received for 1 I0313 13:29:20.692967 6 log.go:172] (0xc00060c840) (0xc0027e5040) Create stream I0313 13:29:20.692976 6 log.go:172] (0xc00060c840) (0xc0027e5040) Stream added, broadcasting: 3 I0313 13:29:20.693699 6 log.go:172] (0xc00060c840) Reply frame received for 3 I0313 13:29:20.693733 6 log.go:172] (0xc00060c840) (0xc0027e50e0) Create stream I0313 13:29:20.693743 6 log.go:172] (0xc00060c840) (0xc0027e50e0) Stream added, broadcasting: 5 I0313 13:29:20.694590 6 log.go:172] (0xc00060c840) Reply frame received for 5 I0313 13:29:20.755674 6 log.go:172] (0xc00060c840) 
Data frame received for 3 I0313 13:29:20.755699 6 log.go:172] (0xc0027e5040) (3) Data frame handling I0313 13:29:20.755717 6 log.go:172] (0xc0027e5040) (3) Data frame sent I0313 13:29:20.756077 6 log.go:172] (0xc00060c840) Data frame received for 3 I0313 13:29:20.756096 6 log.go:172] (0xc0027e5040) (3) Data frame handling I0313 13:29:20.756470 6 log.go:172] (0xc00060c840) Data frame received for 5 I0313 13:29:20.756485 6 log.go:172] (0xc0027e50e0) (5) Data frame handling I0313 13:29:20.757645 6 log.go:172] (0xc00060c840) Data frame received for 1 I0313 13:29:20.757662 6 log.go:172] (0xc002f40280) (1) Data frame handling I0313 13:29:20.757671 6 log.go:172] (0xc002f40280) (1) Data frame sent I0313 13:29:20.757683 6 log.go:172] (0xc00060c840) (0xc002f40280) Stream removed, broadcasting: 1 I0313 13:29:20.757700 6 log.go:172] (0xc00060c840) Go away received I0313 13:29:20.757815 6 log.go:172] (0xc00060c840) (0xc002f40280) Stream removed, broadcasting: 1 I0313 13:29:20.757832 6 log.go:172] (0xc00060c840) (0xc0027e5040) Stream removed, broadcasting: 3 I0313 13:29:20.757845 6 log.go:172] (0xc00060c840) (0xc0027e50e0) Stream removed, broadcasting: 5 Mar 13 13:29:20.757: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:29:20.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9178" for this suite. Mar 13 13:29:34.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:29:34.851: INFO: namespace pod-network-test-9178 deletion completed in 14.090683268s • [SLOW TEST:32.479 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:29:34.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:29:34.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8062" for this suite. 
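The QOS class asserted here is computed by the API server, not declared in the manifest: requests equal to limits for every container yields Guaranteed, any resource specified without that equality yields Burstable, and no resources at all yields BestEffort. A quick check, with pod name and values as illustrative stand-ins:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
EOF
$ kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'
Guaranteed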
Mar 13 13:29:57.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:29:57.099: INFO: namespace pods-8062 deletion completed in 22.106145913s • [SLOW TEST:22.247 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:29:57.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-a4e018ef-56ac-4806-b794-7268bcd8e8ae STEP: Creating a pod to test consume configMaps Mar 13 13:29:57.154: INFO: Waiting up to 5m0s for pod "pod-configmaps-90468079-a15d-4870-aca4-021c356cc6f3" in namespace "configmap-5880" to be "success or failure" Mar 13 13:29:57.174: INFO: Pod "pod-configmaps-90468079-a15d-4870-aca4-021c356cc6f3": Phase="Pending", Reason="", readiness=false. Elapsed: 20.389225ms Mar 13 13:29:59.178: INFO: Pod "pod-configmaps-90468079-a15d-4870-aca4-021c356cc6f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024383506s STEP: Saw pod success Mar 13 13:29:59.178: INFO: Pod "pod-configmaps-90468079-a15d-4870-aca4-021c356cc6f3" satisfied condition "success or failure" Mar 13 13:29:59.182: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-90468079-a15d-4870-aca4-021c356cc6f3 container configmap-volume-test: STEP: delete the pod Mar 13 13:29:59.209: INFO: Waiting for pod pod-configmaps-90468079-a15d-4870-aca4-021c356cc6f3 to disappear Mar 13 13:29:59.213: INFO: Pod pod-configmaps-90468079-a15d-4870-aca4-021c356cc6f3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:29:59.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5880" for this suite. 
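This is the plain configMap volume, the simpler sibling of the projected variant earlier: with no items list, every key in the ConfigMap becomes a file named after the key. A sketch with illustrative names:

$ kubectl create configmap configmap-test-volume --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]   # prints value-1
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
EOF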
Mar 13 13:30:05.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:30:05.312: INFO: namespace configmap-5880 deletion completed in 6.095699294s • [SLOW TEST:8.212 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:30:05.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Mar 13 13:30:05.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5277' Mar 13 13:30:05.630: INFO: stderr: "" Mar 13 13:30:05.630: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Mar 13 13:30:06.634: INFO: Selector matched 1 pods for map[app:redis] Mar 13 13:30:06.634: INFO: Found 0 / 1 Mar 13 13:30:07.634: INFO: Selector matched 1 pods for map[app:redis] Mar 13 13:30:07.634: INFO: Found 1 / 1 Mar 13 13:30:07.634: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 13 13:30:07.637: INFO: Selector matched 1 pods for map[app:redis] Mar 13 13:30:07.637: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Mar 13 13:30:07.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hg4gh redis-master --namespace=kubectl-5277' Mar 13 13:30:07.745: INFO: stderr: "" Mar 13 13:30:07.745: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 13 Mar 13:30:06.767 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 13 Mar 13:30:06.767 # Server started, Redis version 3.2.12\n1:M 13 Mar 13:30:06.767 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 13 Mar 13:30:06.767 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Mar 13 13:30:07.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hg4gh redis-master --namespace=kubectl-5277 --tail=1' Mar 13 13:30:07.850: INFO: stderr: "" Mar 13 13:30:07.850: INFO: stdout: "1:M 13 Mar 13:30:06.767 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Mar 13 13:30:07.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hg4gh redis-master --namespace=kubectl-5277 --limit-bytes=1' Mar 13 13:30:07.951: INFO: stderr: "" Mar 13 13:30:07.951: INFO: stdout: " " STEP: exposing timestamps Mar 13 13:30:07.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hg4gh redis-master --namespace=kubectl-5277 --tail=1 --timestamps' Mar 13 13:30:08.032: INFO: stderr: "" Mar 13 13:30:08.032: INFO: stdout: "2020-03-13T13:30:06.767320807Z 1:M 13 Mar 13:30:06.767 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Mar 13 13:30:10.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hg4gh redis-master --namespace=kubectl-5277 --since=1s' Mar 13 13:30:10.618: INFO: stderr: "" Mar 13 13:30:10.618: INFO: stdout: "" Mar 13 13:30:10.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hg4gh redis-master --namespace=kubectl-5277 --since=24h' Mar 13 13:30:10.694: INFO: stderr: "" Mar 13 13:30:10.694: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 13 Mar 13:30:06.767 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 13 Mar 13:30:06.767 # Server started, Redis version 3.2.12\n1:M 13 Mar 13:30:06.767 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 13 Mar 13:30:06.767 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Mar 13 13:30:10.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5277' Mar 13 13:30:10.784: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 13 13:30:10.784: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Mar 13 13:30:10.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-5277' Mar 13 13:30:10.856: INFO: stderr: "No resources found.\n" Mar 13 13:30:10.856: INFO: stdout: "" Mar 13 13:30:10.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-5277 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 13 13:30:10.916: INFO: stderr: "" Mar 13 13:30:10.916: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:30:10.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5277" for this suite. Mar 13 13:30:16.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:30:17.013: INFO: namespace kubectl-5277 deletion completed in 6.09457551s • [SLOW TEST:11.701 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:30:17.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2837 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 13 13:30:17.055: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 13 13:30:37.153: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.213:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2837 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:30:37.153: INFO: >>> kubeConfig: /root/.kube/config I0313 13:30:37.192190 6 log.go:172] (0xc0018b0210) (0xc002f41ae0) Create stream I0313 13:30:37.192218 6 log.go:172] (0xc0018b0210) (0xc002f41ae0) Stream added, broadcasting: 1 I0313 13:30:37.196201 6 log.go:172] (0xc0018b0210) Reply frame received for 1 I0313 13:30:37.196244 6 log.go:172] (0xc0018b0210) (0xc00150f900) Create stream I0313 13:30:37.196255 6 log.go:172] 
(0xc0018b0210) (0xc00150f900) Stream added, broadcasting: 3 I0313 13:30:37.201037 6 log.go:172] (0xc0018b0210) Reply frame received for 3 I0313 13:30:37.201076 6 log.go:172] (0xc0018b0210) (0xc002f41b80) Create stream I0313 13:30:37.201088 6 log.go:172] (0xc0018b0210) (0xc002f41b80) Stream added, broadcasting: 5 I0313 13:30:37.202237 6 log.go:172] (0xc0018b0210) Reply frame received for 5 I0313 13:30:37.270434 6 log.go:172] (0xc0018b0210) Data frame received for 3 I0313 13:30:37.270462 6 log.go:172] (0xc00150f900) (3) Data frame handling I0313 13:30:37.270482 6 log.go:172] (0xc00150f900) (3) Data frame sent I0313 13:30:37.270832 6 log.go:172] (0xc0018b0210) Data frame received for 5 I0313 13:30:37.270871 6 log.go:172] (0xc002f41b80) (5) Data frame handling I0313 13:30:37.270895 6 log.go:172] (0xc0018b0210) Data frame received for 3 I0313 13:30:37.270908 6 log.go:172] (0xc00150f900) (3) Data frame handling I0313 13:30:37.272455 6 log.go:172] (0xc0018b0210) Data frame received for 1 I0313 13:30:37.272468 6 log.go:172] (0xc002f41ae0) (1) Data frame handling I0313 13:30:37.272477 6 log.go:172] (0xc002f41ae0) (1) Data frame sent I0313 13:30:37.272489 6 log.go:172] (0xc0018b0210) (0xc002f41ae0) Stream removed, broadcasting: 1 I0313 13:30:37.272504 6 log.go:172] (0xc0018b0210) Go away received I0313 13:30:37.272772 6 log.go:172] (0xc0018b0210) (0xc002f41ae0) Stream removed, broadcasting: 1 I0313 13:30:37.272793 6 log.go:172] (0xc0018b0210) (0xc00150f900) Stream removed, broadcasting: 3 I0313 13:30:37.272804 6 log.go:172] (0xc0018b0210) (0xc002f41b80) Stream removed, broadcasting: 5 Mar 13 13:30:37.272: INFO: Found all expected endpoints: [netserver-0] Mar 13 13:30:37.277: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.81:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2837 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:30:37.277: INFO: >>> kubeConfig: /root/.kube/config I0313 13:30:37.301176 6 log.go:172] (0xc000dfe8f0) (0xc0004f0140) Create stream I0313 13:30:37.301195 6 log.go:172] (0xc000dfe8f0) (0xc0004f0140) Stream added, broadcasting: 1 I0313 13:30:37.302454 6 log.go:172] (0xc000dfe8f0) Reply frame received for 1 I0313 13:30:37.302488 6 log.go:172] (0xc000dfe8f0) (0xc0005c01e0) Create stream I0313 13:30:37.302499 6 log.go:172] (0xc000dfe8f0) (0xc0005c01e0) Stream added, broadcasting: 3 I0313 13:30:37.303329 6 log.go:172] (0xc000dfe8f0) Reply frame received for 3 I0313 13:30:37.303374 6 log.go:172] (0xc000dfe8f0) (0xc001872dc0) Create stream I0313 13:30:37.303385 6 log.go:172] (0xc000dfe8f0) (0xc001872dc0) Stream added, broadcasting: 5 I0313 13:30:37.304202 6 log.go:172] (0xc000dfe8f0) Reply frame received for 5 I0313 13:30:37.384594 6 log.go:172] (0xc000dfe8f0) Data frame received for 3 I0313 13:30:37.384629 6 log.go:172] (0xc0005c01e0) (3) Data frame handling I0313 13:30:37.384645 6 log.go:172] (0xc0005c01e0) (3) Data frame sent I0313 13:30:37.384660 6 log.go:172] (0xc000dfe8f0) Data frame received for 3 I0313 13:30:37.384673 6 log.go:172] (0xc0005c01e0) (3) Data frame handling I0313 13:30:37.384699 6 log.go:172] (0xc000dfe8f0) Data frame received for 5 I0313 13:30:37.384719 6 log.go:172] (0xc001872dc0) (5) Data frame handling I0313 13:30:37.386620 6 log.go:172] (0xc000dfe8f0) Data frame received for 1 I0313 13:30:37.386639 6 log.go:172] (0xc0004f0140) (1) Data frame handling I0313 13:30:37.386649 6 log.go:172] (0xc0004f0140) (1) 
Data frame sent I0313 13:30:37.386663 6 log.go:172] (0xc000dfe8f0) (0xc0004f0140) Stream removed, broadcasting: 1 I0313 13:30:37.386677 6 log.go:172] (0xc000dfe8f0) Go away received I0313 13:30:37.386789 6 log.go:172] (0xc000dfe8f0) (0xc0004f0140) Stream removed, broadcasting: 1 I0313 13:30:37.386811 6 log.go:172] (0xc000dfe8f0) (0xc0005c01e0) Stream removed, broadcasting: 3 I0313 13:30:37.386826 6 log.go:172] (0xc000dfe8f0) (0xc001872dc0) Stream removed, broadcasting: 5 Mar 13 13:30:37.386: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:30:37.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2837" for this suite. Mar 13 13:30:59.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:30:59.474: INFO: namespace pod-network-test-2837 deletion completed in 22.083887554s • [SLOW TEST:42.461 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:30:59.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 13:30:59.523: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5667729-03c5-4edc-a36f-df0f2b2c6c9e" in namespace "projected-2181" to be "success or failure" Mar 13 13:30:59.535: INFO: Pod "downwardapi-volume-b5667729-03c5-4edc-a36f-df0f2b2c6c9e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.246818ms Mar 13 13:31:01.538: INFO: Pod "downwardapi-volume-b5667729-03c5-4edc-a36f-df0f2b2c6c9e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.015394378s STEP: Saw pod success Mar 13 13:31:01.538: INFO: Pod "downwardapi-volume-b5667729-03c5-4edc-a36f-df0f2b2c6c9e" satisfied condition "success or failure" Mar 13 13:31:01.540: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b5667729-03c5-4edc-a36f-df0f2b2c6c9e container client-container: STEP: delete the pod Mar 13 13:31:01.570: INFO: Waiting for pod downwardapi-volume-b5667729-03c5-4edc-a36f-df0f2b2c6c9e to disappear Mar 13 13:31:01.594: INFO: Pod downwardapi-volume-b5667729-03c5-4edc-a36f-df0f2b2c6c9e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:31:01.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2181" for this suite. Mar 13 13:31:07.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:31:07.685: INFO: namespace projected-2181 deletion completed in 6.086784054s • [SLOW TEST:8.211 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:31:07.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-nqs28 in namespace proxy-811 I0313 13:31:07.771298 6 runners.go:180] Created replication controller with name: proxy-service-nqs28, namespace: proxy-811, replica count: 1 I0313 13:31:08.821699 6 runners.go:180] proxy-service-nqs28 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0313 13:31:09.821937 6 runners.go:180] proxy-service-nqs28 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0313 13:31:10.822216 6 runners.go:180] proxy-service-nqs28 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0313 13:31:11.822441 6 runners.go:180] proxy-service-nqs28 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 13 13:31:11.825: INFO: setup took 4.105116695s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 13 13:31:11.836: INFO: (0) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 10.386054ms) Mar 13 13:31:11.836: INFO: (0) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 10.290437ms) Mar 13 
13:31:11.836: INFO: (0) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:1080/proxy/: testt... (200; 10.376582ms) Mar 13 13:31:11.836: INFO: (0) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx/proxy/: test (200; 10.385482ms) Mar 13 13:31:11.836: INFO: (0) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 10.452726ms) Mar 13 13:31:11.837: INFO: (0) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 10.963681ms) Mar 13 13:31:11.838: INFO: (0) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: testtest (200; 5.872129ms) Mar 13 13:31:11.853: INFO: (1) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 6.099539ms) Mar 13 13:31:11.853: INFO: (1) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 6.167844ms) Mar 13 13:31:11.853: INFO: (1) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:1080/proxy/: t... (200; 6.23676ms) Mar 13 13:31:11.853: INFO: (1) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 6.272276ms) Mar 13 13:31:11.853: INFO: (1) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 6.117601ms) Mar 13 13:31:11.853: INFO: (1) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 6.350612ms) Mar 13 13:31:11.853: INFO: (1) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 6.262181ms) Mar 13 13:31:11.853: INFO: (1) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 6.429137ms) Mar 13 13:31:11.854: INFO: (1) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 6.902195ms) Mar 13 13:31:11.859: INFO: (2) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:462/proxy/: tls qux (200; 5.296674ms) Mar 13 13:31:11.860: INFO: (2) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 5.918575ms) Mar 13 13:31:11.860: INFO: (2) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:460/proxy/: tls baz (200; 5.894768ms) Mar 13 13:31:11.860: INFO: (2) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:1080/proxy/: testtest (200; 6.147217ms) Mar 13 13:31:11.860: INFO: (2) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: t... 
(200; 7.559623ms) Mar 13 13:31:11.862: INFO: (2) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 7.726655ms) Mar 13 13:31:11.862: INFO: (2) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 7.709276ms) Mar 13 13:31:11.862: INFO: (2) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 7.81304ms) Mar 13 13:31:11.862: INFO: (2) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 7.852797ms) Mar 13 13:31:11.862: INFO: (2) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 8.209996ms) Mar 13 13:31:11.867: INFO: (2) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 13.282859ms) Mar 13 13:31:11.867: INFO: (2) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 13.456435ms) Mar 13 13:31:11.874: INFO: (3) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 6.476179ms) Mar 13 13:31:11.874: INFO: (3) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:1080/proxy/: t... (200; 6.62269ms) Mar 13 13:31:11.874: INFO: (3) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 6.699509ms) Mar 13 13:31:11.874: INFO: (3) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 6.967306ms) Mar 13 13:31:11.874: INFO: (3) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:1080/proxy/: testtest (200; 7.110749ms) Mar 13 13:31:11.875: INFO: (3) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 7.291573ms) Mar 13 13:31:11.874: INFO: (3) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:462/proxy/: tls qux (200; 7.096717ms) Mar 13 13:31:11.875: INFO: (3) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 7.216888ms) Mar 13 13:31:11.875: INFO: (3) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 7.332556ms) Mar 13 13:31:11.875: INFO: (3) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 7.546041ms) Mar 13 13:31:11.875: INFO: (3) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:460/proxy/: tls baz (200; 7.668049ms) Mar 13 13:31:11.875: INFO: (3) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 7.689898ms) Mar 13 13:31:11.876: INFO: (3) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: testt... 
(200; 5.328254ms) Mar 13 13:31:11.881: INFO: (4) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:462/proxy/: tls qux (200; 5.319398ms) Mar 13 13:31:11.881: INFO: (4) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 5.231682ms) Mar 13 13:31:11.882: INFO: (4) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 5.982275ms) Mar 13 13:31:11.882: INFO: (4) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 5.979669ms) Mar 13 13:31:11.882: INFO: (4) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 6.236822ms) Mar 13 13:31:11.882: INFO: (4) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 6.259894ms) Mar 13 13:31:11.882: INFO: (4) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 6.369739ms) Mar 13 13:31:11.882: INFO: (4) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx/proxy/: test (200; 6.425927ms) Mar 13 13:31:11.882: INFO: (4) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 6.588054ms) Mar 13 13:31:11.885: INFO: (5) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 3.044798ms) Mar 13 13:31:11.886: INFO: (5) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: testt... (200; 4.498246ms) Mar 13 13:31:11.893: INFO: (5) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 10.435887ms) Mar 13 13:31:11.893: INFO: (5) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 10.838981ms) Mar 13 13:31:11.893: INFO: (5) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 10.940336ms) Mar 13 13:31:11.893: INFO: (5) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 10.919879ms) Mar 13 13:31:11.894: INFO: (5) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx/proxy/: test (200; 11.419135ms) Mar 13 13:31:11.894: INFO: (5) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 11.570976ms) Mar 13 13:31:11.894: INFO: (5) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:460/proxy/: tls baz (200; 11.698345ms) Mar 13 13:31:11.894: INFO: (5) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 11.885149ms) Mar 13 13:31:11.894: INFO: (5) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 11.826551ms) Mar 13 13:31:11.895: INFO: (5) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 12.208345ms) Mar 13 13:31:11.896: INFO: (5) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 13.145501ms) Mar 13 13:31:11.900: INFO: (6) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:462/proxy/: tls qux (200; 4.038254ms) Mar 13 13:31:11.900: INFO: (6) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 3.994114ms) Mar 13 13:31:11.900: INFO: (6) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: testtest (200; 5.202653ms) Mar 13 13:31:11.901: INFO: (6) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:1080/proxy/: t... 
(200; 5.245928ms) Mar 13 13:31:11.901: INFO: (6) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 5.410109ms) Mar 13 13:31:11.901: INFO: (6) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 5.558422ms) Mar 13 13:31:11.901: INFO: (6) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 5.439436ms) Mar 13 13:31:11.901: INFO: (6) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 5.712815ms) Mar 13 13:31:11.902: INFO: (6) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 5.892461ms) Mar 13 13:31:11.902: INFO: (6) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 6.185965ms) Mar 13 13:31:11.904: INFO: (7) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:1080/proxy/: t... (200; 2.16379ms) Mar 13 13:31:11.905: INFO: (7) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 2.994903ms) Mar 13 13:31:11.905: INFO: (7) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:460/proxy/: tls baz (200; 3.126167ms) Mar 13 13:31:11.906: INFO: (7) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 4.373488ms) Mar 13 13:31:11.907: INFO: (7) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 4.757117ms) Mar 13 13:31:11.907: INFO: (7) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 4.791233ms) Mar 13 13:31:11.907: INFO: (7) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 5.005249ms) Mar 13 13:31:11.907: INFO: (7) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx/proxy/: test (200; 5.018086ms) Mar 13 13:31:11.907: INFO: (7) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 5.067188ms) Mar 13 13:31:11.907: INFO: (7) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 5.070324ms) Mar 13 13:31:11.907: INFO: (7) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:462/proxy/: tls qux (200; 5.043555ms) Mar 13 13:31:11.907: INFO: (7) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: testtest (200; 2.219445ms) Mar 13 13:31:11.910: INFO: (8) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:1080/proxy/: t... (200; 2.510536ms) Mar 13 13:31:11.911: INFO: (8) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 3.599478ms) Mar 13 13:31:11.911: INFO: (8) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:1080/proxy/: testt... 
(200; 3.715972ms) Mar 13 13:31:11.917: INFO: (9) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:462/proxy/: tls qux (200; 3.654519ms) Mar 13 13:31:11.917: INFO: (9) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 3.69454ms) Mar 13 13:31:11.918: INFO: (9) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: testtest (200; 5.448363ms) Mar 13 13:31:11.919: INFO: (9) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 5.567728ms) Mar 13 13:31:11.919: INFO: (9) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 6.218746ms) Mar 13 13:31:11.919: INFO: (9) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 6.241846ms) Mar 13 13:31:11.919: INFO: (9) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 6.2497ms) Mar 13 13:31:11.919: INFO: (9) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 6.300139ms) Mar 13 13:31:11.919: INFO: (9) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 6.298879ms) Mar 13 13:31:11.919: INFO: (9) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 6.295574ms) Mar 13 13:31:11.923: INFO: (10) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:1080/proxy/: t... (200; 3.354619ms) Mar 13 13:31:11.923: INFO: (10) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 3.348933ms) Mar 13 13:31:11.923: INFO: (10) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:462/proxy/: tls qux (200; 3.588867ms) Mar 13 13:31:11.924: INFO: (10) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 4.695436ms) Mar 13 13:31:11.924: INFO: (10) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 4.890946ms) Mar 13 13:31:11.924: INFO: (10) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:460/proxy/: tls baz (200; 4.751886ms) Mar 13 13:31:11.924: INFO: (10) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:1080/proxy/: testtest (200; 4.792549ms) Mar 13 13:31:11.924: INFO: (10) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 4.834349ms) Mar 13 13:31:11.925: INFO: (10) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 5.31683ms) Mar 13 13:31:11.925: INFO: (10) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 5.336363ms) Mar 13 13:31:11.925: INFO: (10) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 5.394339ms) Mar 13 13:31:11.925: INFO: (10) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 5.74912ms) Mar 13 13:31:11.925: INFO: (10) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 5.809028ms) Mar 13 13:31:11.926: INFO: (10) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: t... 
(200; 2.564552ms) Mar 13 13:31:11.930: INFO: (11) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 3.581626ms) Mar 13 13:31:11.930: INFO: (11) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx/proxy/: test (200; 3.99124ms) Mar 13 13:31:11.931: INFO: (11) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 4.574704ms) Mar 13 13:31:11.931: INFO: (11) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: testtestt... (200; 5.484072ms) Mar 13 13:31:11.938: INFO: (12) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: test (200; 5.50906ms) Mar 13 13:31:11.938: INFO: (12) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 5.530625ms) Mar 13 13:31:11.941: INFO: (13) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:460/proxy/: tls baz (200; 2.639431ms) Mar 13 13:31:11.941: INFO: (13) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:462/proxy/: tls qux (200; 2.89479ms) Mar 13 13:31:11.942: INFO: (13) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: testt... (200; 4.01733ms) Mar 13 13:31:11.942: INFO: (13) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx/proxy/: test (200; 4.026542ms) Mar 13 13:31:11.942: INFO: (13) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 3.977152ms) Mar 13 13:31:11.943: INFO: (13) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 5.082187ms) Mar 13 13:31:11.943: INFO: (13) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 5.181425ms) Mar 13 13:31:11.943: INFO: (13) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 5.27026ms) Mar 13 13:31:11.943: INFO: (13) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 5.268147ms) Mar 13 13:31:11.943: INFO: (13) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 5.285145ms) Mar 13 13:31:11.943: INFO: (13) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 5.355057ms) Mar 13 13:31:11.946: INFO: (14) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:1080/proxy/: testt... 
(200; 2.379133ms) Mar 13 13:31:11.947: INFO: (14) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:460/proxy/: tls baz (200; 3.526958ms) Mar 13 13:31:11.948: INFO: (14) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: test (200; 4.815566ms) Mar 13 13:31:11.948: INFO: (14) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 4.824846ms) Mar 13 13:31:11.948: INFO: (14) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 4.868178ms) Mar 13 13:31:11.948: INFO: (14) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 4.930193ms) Mar 13 13:31:11.948: INFO: (14) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 4.948065ms) Mar 13 13:31:11.948: INFO: (14) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 4.968568ms) Mar 13 13:31:11.948: INFO: (14) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 5.102327ms) Mar 13 13:31:11.953: INFO: (15) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 3.946404ms) Mar 13 13:31:11.953: INFO: (15) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 4.259137ms) Mar 13 13:31:11.953: INFO: (15) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 3.709712ms) Mar 13 13:31:11.953: INFO: (15) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:1080/proxy/: testt... (200; 3.741483ms) Mar 13 13:31:11.953: INFO: (15) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 4.42123ms) Mar 13 13:31:11.953: INFO: (15) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 4.289387ms) Mar 13 13:31:11.953: INFO: (15) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 4.091104ms) Mar 13 13:31:11.953: INFO: (15) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:462/proxy/: tls qux (200; 4.15675ms) Mar 13 13:31:11.954: INFO: (15) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx/proxy/: test (200; 5.272742ms) Mar 13 13:31:11.954: INFO: (15) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 5.00586ms) Mar 13 13:31:11.954: INFO: (15) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:460/proxy/: tls baz (200; 5.255555ms) Mar 13 13:31:11.954: INFO: (15) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 5.091747ms) Mar 13 13:31:11.954: INFO: (15) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 4.696253ms) Mar 13 13:31:11.954: INFO: (15) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 5.047195ms) Mar 13 13:31:11.954: INFO: (15) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: testt... 
(200; 4.754933ms) Mar 13 13:31:11.959: INFO: (16) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 4.851727ms) Mar 13 13:31:11.959: INFO: (16) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 4.771215ms) Mar 13 13:31:11.959: INFO: (16) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 4.928576ms) Mar 13 13:31:11.959: INFO: (16) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx/proxy/: test (200; 4.999426ms) Mar 13 13:31:11.959: INFO: (16) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 4.844318ms) Mar 13 13:31:11.960: INFO: (16) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:462/proxy/: tls qux (200; 4.911562ms) Mar 13 13:31:11.960: INFO: (16) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 5.391766ms) Mar 13 13:31:11.960: INFO: (16) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 5.307688ms) Mar 13 13:31:11.960: INFO: (16) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:460/proxy/: tls baz (200; 5.445018ms) Mar 13 13:31:11.960: INFO: (16) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: t... (200; 2.554419ms) Mar 13 13:31:11.963: INFO: (17) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 2.551177ms) Mar 13 13:31:11.963: INFO: (17) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:460/proxy/: tls baz (200; 2.673914ms) Mar 13 13:31:11.964: INFO: (17) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:1080/proxy/: testtest (200; 4.794038ms) Mar 13 13:31:11.965: INFO: (17) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 4.81425ms) Mar 13 13:31:11.965: INFO: (17) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 4.932796ms) Mar 13 13:31:11.965: INFO: (17) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 5.109124ms) Mar 13 13:31:11.966: INFO: (17) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 5.411ms) Mar 13 13:31:11.966: INFO: (17) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 5.394176ms) Mar 13 13:31:11.967: INFO: (17) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 6.52337ms) Mar 13 13:31:11.969: INFO: (18) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 2.656133ms) Mar 13 13:31:11.970: INFO: (18) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 3.118937ms) Mar 13 13:31:11.970: INFO: (18) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:462/proxy/: tls qux (200; 3.048247ms) Mar 13 13:31:11.970: INFO: (18) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 2.97582ms) Mar 13 13:31:11.970: INFO: (18) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:1080/proxy/: testt... 
(200; 3.259312ms) Mar 13 13:31:11.970: INFO: (18) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: test (200; 3.583052ms) Mar 13 13:31:11.971: INFO: (18) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 4.675659ms) Mar 13 13:31:11.971: INFO: (18) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 4.719665ms) Mar 13 13:31:11.971: INFO: (18) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname2/proxy/: bar (200; 4.791184ms) Mar 13 13:31:11.972: INFO: (18) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 4.744416ms) Mar 13 13:31:11.972: INFO: (18) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 4.769313ms) Mar 13 13:31:11.972: INFO: (18) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 4.797922ms) Mar 13 13:31:11.975: INFO: (19) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 3.34461ms) Mar 13 13:31:11.975: INFO: (19) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:160/proxy/: foo (200; 3.756843ms) Mar 13 13:31:11.976: INFO: (19) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx/proxy/: test (200; 4.231553ms) Mar 13 13:31:11.976: INFO: (19) /api/v1/namespaces/proxy-811/pods/http:proxy-service-nqs28-v7nwx:162/proxy/: bar (200; 4.290302ms) Mar 13 13:31:11.976: INFO: (19) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname1/proxy/: foo (200; 4.366868ms) Mar 13 13:31:11.976: INFO: (19) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:462/proxy/: tls qux (200; 4.425827ms) Mar 13 13:31:11.976: INFO: (19) /api/v1/namespaces/proxy-811/services/http:proxy-service-nqs28:portname1/proxy/: foo (200; 4.67176ms) Mar 13 13:31:11.977: INFO: (19) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname2/proxy/: tls qux (200; 5.585653ms) Mar 13 13:31:11.977: INFO: (19) /api/v1/namespaces/proxy-811/services/https:proxy-service-nqs28:tlsportname1/proxy/: tls baz (200; 5.554163ms) Mar 13 13:31:11.977: INFO: (19) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:460/proxy/: tls baz (200; 5.579574ms) Mar 13 13:31:11.977: INFO: (19) /api/v1/namespaces/proxy-811/pods/proxy-service-nqs28-v7nwx:1080/proxy/: testt... (200; 5.920164ms) Mar 13 13:31:11.978: INFO: (19) /api/v1/namespaces/proxy-811/services/proxy-service-nqs28:portname2/proxy/: bar (200; 6.053634ms) Mar 13 13:31:11.978: INFO: (19) /api/v1/namespaces/proxy-811/pods/https:proxy-service-nqs28-v7nwx:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-hjx4 STEP: Creating a pod to test atomic-volume-subpath Mar 13 13:31:30.698: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hjx4" in namespace "subpath-7242" to be "success or failure" Mar 13 13:31:30.702: INFO: Pod "pod-subpath-test-configmap-hjx4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.378924ms Mar 13 13:31:32.705: INFO: Pod "pod-subpath-test-configmap-hjx4": Phase="Running", Reason="", readiness=true. Elapsed: 2.007022681s Mar 13 13:31:34.707: INFO: Pod "pod-subpath-test-configmap-hjx4": Phase="Running", Reason="", readiness=true. Elapsed: 4.009205499s Mar 13 13:31:36.711: INFO: Pod "pod-subpath-test-configmap-hjx4": Phase="Running", Reason="", readiness=true. Elapsed: 6.013008968s Mar 13 13:31:38.715: INFO: Pod "pod-subpath-test-configmap-hjx4": Phase="Running", Reason="", readiness=true. Elapsed: 8.01696746s Mar 13 13:31:40.719: INFO: Pod "pod-subpath-test-configmap-hjx4": Phase="Running", Reason="", readiness=true. Elapsed: 10.020467488s Mar 13 13:31:42.721: INFO: Pod "pod-subpath-test-configmap-hjx4": Phase="Running", Reason="", readiness=true. Elapsed: 12.022980649s Mar 13 13:31:44.724: INFO: Pod "pod-subpath-test-configmap-hjx4": Phase="Running", Reason="", readiness=true. Elapsed: 14.025955609s Mar 13 13:31:46.728: INFO: Pod "pod-subpath-test-configmap-hjx4": Phase="Running", Reason="", readiness=true. Elapsed: 16.029914245s Mar 13 13:31:48.732: INFO: Pod "pod-subpath-test-configmap-hjx4": Phase="Running", Reason="", readiness=true. Elapsed: 18.033672557s Mar 13 13:31:50.736: INFO: Pod "pod-subpath-test-configmap-hjx4": Phase="Running", Reason="", readiness=true. Elapsed: 20.0372976s Mar 13 13:31:52.754: INFO: Pod "pod-subpath-test-configmap-hjx4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.055857215s STEP: Saw pod success Mar 13 13:31:52.754: INFO: Pod "pod-subpath-test-configmap-hjx4" satisfied condition "success or failure" Mar 13 13:31:52.756: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-hjx4 container test-container-subpath-configmap-hjx4: STEP: delete the pod Mar 13 13:31:52.772: INFO: Waiting for pod pod-subpath-test-configmap-hjx4 to disappear Mar 13 13:31:52.795: INFO: Pod pod-subpath-test-configmap-hjx4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-hjx4 Mar 13 13:31:52.795: INFO: Deleting pod "pod-subpath-test-configmap-hjx4" in namespace "subpath-7242" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:31:52.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7242" for this suite. 
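The subpath case above reduces to a single pod-spec feature: mounting one configmap key over a path that already exists in the container image, with the atomic writer keeping the underlying volume consistent while the pod runs. A minimal client-go sketch of that pod shape follows; the configmap name, key, image, and paths are illustrative assumptions, not the e2e framework's own values.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // subpathPod mounts the key "data" from configmap "my-config" over a file
    // that already exists in the image, which is the shape the atomic-writer
    // subpath test exercises.
    func subpathPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "config",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container-subpath",
                    Image:   "docker.io/library/busybox:1.29", // illustrative image
                    Command: []string{"cat", "/existing/file"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "config",
                        MountPath: "/existing/file", // target file already present in the image
                        SubPath:   "data",           // single key out of the configmap volume
                    }},
                }},
            },
        }
    }

The pod phases logged above (Pending, then Running with readiness=true for ~20s, then Succeeded) are what such a spec produces when the container simply reads the mounted file and exits.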
Mar 13 13:31:58.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:31:58.916: INFO: namespace subpath-7242 deletion completed in 6.11718055s • [SLOW TEST:28.294 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:31:58.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 13:31:58.960: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 13 13:31:58.969: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 13 13:32:03.973: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 13 13:32:03.973: INFO: Creating deployment "test-rolling-update-deployment" Mar 13 13:32:03.979: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 13 13:32:03.996: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 13 13:32:06.002: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 13 13:32:06.003: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 13 13:32:06.008: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-430,SelfLink:/apis/apps/v1/namespaces/deployment-430/deployments/test-rolling-update-deployment,UID:df86ea80-18bd-490c-ac34-3577f7c066b6,ResourceVersion:906993,Generation:1,CreationTimestamp:2020-03-13 13:32:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-13 13:32:04 +0000 UTC 2020-03-13 13:32:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-13 13:32:05 +0000 UTC 2020-03-13 13:32:04 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 13 13:32:06.010: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-430,SelfLink:/apis/apps/v1/namespaces/deployment-430/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:d7f29af5-f637-478a-8422-9d2d09a912b0,ResourceVersion:906982,Generation:1,CreationTimestamp:2020-03-13 13:32:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment df86ea80-18bd-490c-ac34-3577f7c066b6 0xc002ee9be7 0xc002ee9be8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash:
79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 13 13:32:06.011: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 13 13:32:06.011: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-430,SelfLink:/apis/apps/v1/namespaces/deployment-430/replicasets/test-rolling-update-controller,UID:c81db11c-cd8b-4b9d-a361-3fe3d01eb0a9,ResourceVersion:906991,Generation:2,CreationTimestamp:2020-03-13 13:31:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment df86ea80-18bd-490c-ac34-3577f7c066b6 0xc002ee9b17 0xc002ee9b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 13 13:32:06.013: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-bdft7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-bdft7,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-430,SelfLink:/api/v1/namespaces/deployment-430/pods/test-rolling-update-deployment-79f6b9d75c-bdft7,UID:dd8db548-d5fe-4b2e-87c4-47514cf4884a,ResourceVersion:906980,Generation:0,CreationTimestamp:2020-03-13 13:32:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c d7f29af5-f637-478a-8422-9d2d09a912b0 0xc0033786d7 0xc0033786d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w6gsm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w6gsm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-w6gsm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003378750} {node.kubernetes.io/unreachable Exists NoExecute 0xc003378770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:32:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:32:05 +0000 UTC } {ContainersReady 
True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:32:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:32:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.1.216,StartTime:2020-03-13 13:32:04 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-13 13:32:05 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://83d858c70100c630e61711deace9b4975e95d877a15377fff1a77a277e9de1e1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:32:06.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-430" for this suite. Mar 13 13:32:12.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:32:12.096: INFO: namespace deployment-430 deletion completed in 6.081289084s • [SLOW TEST:13.178 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:32:12.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 13:32:32.188: INFO: Container started at 2020-03-13 13:32:13 +0000 UTC, pod became ready at 2020-03-13 13:32:31 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:32:32.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3" for this suite. 
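The readiness-probe assertion is visible in the two timestamps logged above: the container started at 13:32:13 but the pod only became Ready at 13:32:31, consistent with the probe's initial delay, and the restart count stayed at zero because a failing readiness probe only removes the pod from service endpoints rather than restarting the container. A sketch of such a container spec against the v1.15-era API follows; the image tag and timing values are assumptions, not the test's own.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // readinessProbed returns a container that must not report Ready before
    // InitialDelaySeconds has elapsed. Unlike a liveness probe, a failing
    // readiness probe never triggers a container restart.
    func readinessProbed() corev1.Container {
        return corev1.Container{
            Name:  "test-webserver",
            Image: "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0", // assumed tag
            ReadinessProbe: &corev1.Probe{
                Handler: corev1.Handler{ // this field is named ProbeHandler in newer API versions
                    HTTPGet: &corev1.HTTPGetAction{
                        Path: "/",
                        Port: intstr.FromInt(80),
                    },
                },
                InitialDelaySeconds: 20, // assumed; would explain the ~18s start-to-ready gap above
                PeriodSeconds:       5,
                FailureThreshold:    3,
            },
        }
    }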
Mar 13 13:32:54.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:32:54.283: INFO: namespace container-probe-3 deletion completed in 22.09035941s • [SLOW TEST:42.186 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:32:54.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3513 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 13 13:32:54.303: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 13 13:33:12.416: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.218:8080/dial?request=hostName&protocol=http&host=10.244.1.217&port=8080&tries=1'] Namespace:pod-network-test-3513 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:33:12.416: INFO: >>> kubeConfig: /root/.kube/config I0313 13:33:12.441020 6 log.go:172] (0xc000fcc2c0) (0xc0002f3d60) Create stream I0313 13:33:12.441046 6 log.go:172] (0xc000fcc2c0) (0xc0002f3d60) Stream added, broadcasting: 1 I0313 13:33:12.442559 6 log.go:172] (0xc000fcc2c0) Reply frame received for 1 I0313 13:33:12.442589 6 log.go:172] (0xc000fcc2c0) (0xc00315e5a0) Create stream I0313 13:33:12.442599 6 log.go:172] (0xc000fcc2c0) (0xc00315e5a0) Stream added, broadcasting: 3 I0313 13:33:12.443409 6 log.go:172] (0xc000fcc2c0) Reply frame received for 3 I0313 13:33:12.443438 6 log.go:172] (0xc000fcc2c0) (0xc00315e640) Create stream I0313 13:33:12.443449 6 log.go:172] (0xc000fcc2c0) (0xc00315e640) Stream added, broadcasting: 5 I0313 13:33:12.444146 6 log.go:172] (0xc000fcc2c0) Reply frame received for 5 I0313 13:33:12.527532 6 log.go:172] (0xc000fcc2c0) Data frame received for 3 I0313 13:33:12.527572 6 log.go:172] (0xc00315e5a0) (3) Data frame handling I0313 13:33:12.527590 6 log.go:172] (0xc00315e5a0) (3) Data frame sent I0313 13:33:12.527840 6 log.go:172] (0xc000fcc2c0) Data frame received for 3 I0313 13:33:12.527897 6 log.go:172] (0xc00315e5a0) (3) Data frame handling I0313 13:33:12.527929 6 log.go:172] (0xc000fcc2c0) Data frame received for 5 I0313 13:33:12.527947 6 log.go:172] (0xc00315e640) (5) Data frame handling I0313 13:33:12.529772 6 log.go:172] (0xc000fcc2c0) Data frame received for 1 I0313 13:33:12.529793 6 log.go:172] 
(0xc0002f3d60) (1) Data frame handling I0313 13:33:12.529805 6 log.go:172] (0xc0002f3d60) (1) Data frame sent I0313 13:33:12.529822 6 log.go:172] (0xc000fcc2c0) (0xc0002f3d60) Stream removed, broadcasting: 1 I0313 13:33:12.529837 6 log.go:172] (0xc000fcc2c0) Go away received I0313 13:33:12.529943 6 log.go:172] (0xc000fcc2c0) (0xc0002f3d60) Stream removed, broadcasting: 1 I0313 13:33:12.529966 6 log.go:172] (0xc000fcc2c0) (0xc00315e5a0) Stream removed, broadcasting: 3 I0313 13:33:12.529989 6 log.go:172] (0xc000fcc2c0) (0xc00315e640) Stream removed, broadcasting: 5 Mar 13 13:33:12.530: INFO: Waiting for endpoints: map[] Mar 13 13:33:12.533: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.218:8080/dial?request=hostName&protocol=http&host=10.244.2.86&port=8080&tries=1'] Namespace:pod-network-test-3513 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 13:33:12.533: INFO: >>> kubeConfig: /root/.kube/config I0313 13:33:12.557433 6 log.go:172] (0xc000e3c580) (0xc0017ef680) Create stream I0313 13:33:12.557457 6 log.go:172] (0xc000e3c580) (0xc0017ef680) Stream added, broadcasting: 1 I0313 13:33:12.559906 6 log.go:172] (0xc000e3c580) Reply frame received for 1 I0313 13:33:12.559943 6 log.go:172] (0xc000e3c580) (0xc0002f3ea0) Create stream I0313 13:33:12.559955 6 log.go:172] (0xc000e3c580) (0xc0002f3ea0) Stream added, broadcasting: 3 I0313 13:33:12.561487 6 log.go:172] (0xc000e3c580) Reply frame received for 3 I0313 13:33:12.561519 6 log.go:172] (0xc000e3c580) (0xc00315e6e0) Create stream I0313 13:33:12.561531 6 log.go:172] (0xc000e3c580) (0xc00315e6e0) Stream added, broadcasting: 5 I0313 13:33:12.563496 6 log.go:172] (0xc000e3c580) Reply frame received for 5 I0313 13:33:12.635525 6 log.go:172] (0xc000e3c580) Data frame received for 5 I0313 13:33:12.635570 6 log.go:172] (0xc00315e6e0) (5) Data frame handling I0313 13:33:12.635600 6 log.go:172] (0xc000e3c580) Data frame received for 3 I0313 13:33:12.635616 6 log.go:172] (0xc0002f3ea0) (3) Data frame handling I0313 13:33:12.635631 6 log.go:172] (0xc0002f3ea0) (3) Data frame sent I0313 13:33:12.635646 6 log.go:172] (0xc000e3c580) Data frame received for 3 I0313 13:33:12.635654 6 log.go:172] (0xc0002f3ea0) (3) Data frame handling I0313 13:33:12.635802 6 log.go:172] (0xc000e3c580) Data frame received for 1 I0313 13:33:12.635825 6 log.go:172] (0xc0017ef680) (1) Data frame handling I0313 13:33:12.635838 6 log.go:172] (0xc0017ef680) (1) Data frame sent I0313 13:33:12.635850 6 log.go:172] (0xc000e3c580) (0xc0017ef680) Stream removed, broadcasting: 1 I0313 13:33:12.635870 6 log.go:172] (0xc000e3c580) Go away received I0313 13:33:12.635978 6 log.go:172] (0xc000e3c580) (0xc0017ef680) Stream removed, broadcasting: 1 I0313 13:33:12.635997 6 log.go:172] (0xc000e3c580) (0xc0002f3ea0) Stream removed, broadcasting: 3 I0313 13:33:12.636008 6 log.go:172] (0xc000e3c580) (0xc00315e6e0) Stream removed, broadcasting: 5 Mar 13 13:33:12.636: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:33:12.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3513" for this suite. 
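The intra-pod check above drives the standard netexec /dial endpoint with curl from a host-test container: one test pod is asked to dial the other over HTTP and report its hostname. The same request can be issued from any in-cluster client; a sketch in Go (the pod IPs and port 8080 are the ones from this run and are only reachable inside the cluster network):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Ask the test pod at 10.244.1.218 to dial its peer at 10.244.1.217 over
	// HTTP and report the peer's hostname, exactly as the curl above does.
	url := "http://10.244.1.218:8080/dial?request=hostName&protocol=http&host=10.244.1.217&port=8080&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // e.g. a JSON list of the hostnames that answered
}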
Mar 13 13:33:34.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:33:34.705: INFO: namespace pod-network-test-3513 deletion completed in 22.065916093s • [SLOW TEST:40.422 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:33:34.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-b5794b2d-ea24-4910-92d7-52f535d599cc STEP: Creating a pod to test consume secrets Mar 13 13:33:34.777: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-34fe83f9-f903-47e8-8934-e3a6d2ec4d3e" in namespace "projected-6861" to be "success or failure" Mar 13 13:33:34.782: INFO: Pod "pod-projected-secrets-34fe83f9-f903-47e8-8934-e3a6d2ec4d3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.851979ms Mar 13 13:33:36.786: INFO: Pod "pod-projected-secrets-34fe83f9-f903-47e8-8934-e3a6d2ec4d3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00842062s STEP: Saw pod success Mar 13 13:33:36.786: INFO: Pod "pod-projected-secrets-34fe83f9-f903-47e8-8934-e3a6d2ec4d3e" satisfied condition "success or failure" Mar 13 13:33:36.789: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-34fe83f9-f903-47e8-8934-e3a6d2ec4d3e container projected-secret-volume-test: STEP: delete the pod Mar 13 13:33:36.819: INFO: Waiting for pod pod-projected-secrets-34fe83f9-f903-47e8-8934-e3a6d2ec4d3e to disappear Mar 13 13:33:36.830: INFO: Pod pod-projected-secrets-34fe83f9-f903-47e8-8934-e3a6d2ec4d3e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:33:36.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6861" for this suite. 
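The projected-secret spec above mounts a secret through the projected volume source, remapping the key to a new path and pinning a per-item mode. A sketch of the volume definition (secret name, key, path, and mode are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	// A projected volume lets several sources (secrets, configmaps, downward
	// API) share one mount point; here a single secret key is remapped and
	// given an explicit item mode, which is what the spec above verifies.
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}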
Mar 13 13:33:42.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:33:42.914: INFO: namespace projected-6861 deletion completed in 6.080232375s • [SLOW TEST:8.208 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:33:42.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-2968 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2968 STEP: Deleting pre-stop pod Mar 13 13:33:52.010: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
} STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:33:52.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2968" for this suite.
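The "prestop": 1 in the payload above is the server pod recording that the tester's preStop hook fired before the tester was torn down. The general shape of such a hook, as a sketch (image, command, and the phone-home URL are illustrative; the real test reports to its server pod):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	grace := int64(30)
	pod := &corev1.Pod{
		Spec: corev1.PodSpec{
			// The hook must finish (or be killed) within the grace period.
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:  "tester",
				Image: "docker.io/library/busybox:1.29",
				Lifecycle: &corev1.Lifecycle{
					// PreStop runs before the container receives SIGTERM,
					// which is how the server above could observe the call.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"/bin/sh", "-c", "wget -qO- http://server:8080/prestop"},
						},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0].Lifecycle)
}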
Mar 13 13:34:30.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:34:30.105: INFO: namespace prestop-2968 deletion completed in 38.085070388s • [SLOW TEST:47.191 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:34:30.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ee7e7b89-168d-41c3-851e-04f5a97cf8a4 STEP: Creating a pod to test consume secrets Mar 13 13:34:30.186: INFO: Waiting up to 5m0s for pod "pod-secrets-e13ec15b-742e-4045-9434-aad76327b70a" in namespace "secrets-9438" to be "success or failure" Mar 13 13:34:30.190: INFO: Pod "pod-secrets-e13ec15b-742e-4045-9434-aad76327b70a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.500982ms Mar 13 13:34:32.194: INFO: Pod "pod-secrets-e13ec15b-742e-4045-9434-aad76327b70a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007611968s STEP: Saw pod success Mar 13 13:34:32.194: INFO: Pod "pod-secrets-e13ec15b-742e-4045-9434-aad76327b70a" satisfied condition "success or failure" Mar 13 13:34:32.196: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-e13ec15b-742e-4045-9434-aad76327b70a container secret-volume-test: STEP: delete the pod Mar 13 13:34:32.222: INFO: Waiting for pod pod-secrets-e13ec15b-742e-4045-9434-aad76327b70a to disappear Mar 13 13:34:32.226: INFO: Pod pod-secrets-e13ec15b-742e-4045-9434-aad76327b70a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:34:32.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9438" for this suite. 
Mar 13 13:34:38.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:34:38.288: INFO: namespace secrets-9438 deletion completed in 6.058947221s • [SLOW TEST:8.183 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:34:38.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-ccdeeede-00c6-4696-b6aa-64a78890f443 STEP: Creating a pod to test consume secrets Mar 13 13:34:38.365: INFO: Waiting up to 5m0s for pod "pod-secrets-715f3f36-aa65-44e9-8931-56f37392f91c" in namespace "secrets-1270" to be "success or failure" Mar 13 13:34:38.370: INFO: Pod "pod-secrets-715f3f36-aa65-44e9-8931-56f37392f91c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.697614ms Mar 13 13:34:40.373: INFO: Pod "pod-secrets-715f3f36-aa65-44e9-8931-56f37392f91c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007127234s STEP: Saw pod success Mar 13 13:34:40.373: INFO: Pod "pod-secrets-715f3f36-aa65-44e9-8931-56f37392f91c" satisfied condition "success or failure" Mar 13 13:34:40.374: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-715f3f36-aa65-44e9-8931-56f37392f91c container secret-volume-test: STEP: delete the pod Mar 13 13:34:40.383: INFO: Waiting for pod pod-secrets-715f3f36-aa65-44e9-8931-56f37392f91c to disappear Mar 13 13:34:40.418: INFO: Pod pod-secrets-715f3f36-aa65-44e9-8931-56f37392f91c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:34:40.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1270" for this suite. 
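Both secrets specs above consume a secret as files in a volume; the second additionally remaps the key to a new path and pins the item mode, which is why it is tagged [LinuxOnly] (POSIX file modes). A sketch of the mapped variant (secret name, key, path, and mode are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map",
				// Without Items, every key appears as a file under its own
				// name; with Items, only the listed keys are mounted, at
				// Path, and Mode shows up as the file's permission bits.
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "new-path-data-1",
					Mode: &mode,
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}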
Mar 13 13:34:46.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:34:46.499: INFO: namespace secrets-1270 deletion completed in 6.078873204s • [SLOW TEST:8.211 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:34:46.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 13 13:34:46.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-8167' Mar 13 13:34:48.202: INFO: stderr: "" Mar 13 13:34:48.202: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Mar 13 13:34:53.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-8167 -o json' Mar 13 13:34:53.328: INFO: stderr: "" Mar 13 13:34:53.328: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-13T13:34:48Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-8167\",\n \"resourceVersion\": \"907559\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8167/pods/e2e-test-nginx-pod\",\n \"uid\": \"bb50ce9f-0e9e-4868-ba64-991dfda76a6f\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-sv8r5\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n 
\"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-sv8r5\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-sv8r5\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-13T13:34:48Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-13T13:34:49Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-13T13:34:49Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-13T13:34:48Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://a503e727b8a4881525732a904d3f0e6bd7ea9715194e4d7186d3f5aad7909222\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-13T13:34:49Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.7\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.91\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-13T13:34:48Z\"\n }\n}\n" STEP: replace the image in the pod Mar 13 13:34:53.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8167' Mar 13 13:34:53.537: INFO: stderr: "" Mar 13 13:34:53.537: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Mar 13 13:34:53.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8167' Mar 13 13:35:04.456: INFO: stderr: "" Mar 13 13:35:04.456: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:35:04.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8167" for this suite. 
Mar 13 13:35:10.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:35:10.568: INFO: namespace kubectl-8167 deletion completed in 6.093045682s • [SLOW TEST:24.069 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:35:10.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 13:35:10.640: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 13 13:35:10.663: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:10.665: INFO: Number of nodes with available pods: 0 Mar 13 13:35:10.665: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:35:11.669: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:11.672: INFO: Number of nodes with available pods: 0 Mar 13 13:35:11.672: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:35:12.669: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:12.672: INFO: Number of nodes with available pods: 1 Mar 13 13:35:12.672: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:35:13.670: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:13.672: INFO: Number of nodes with available pods: 2 Mar 13 13:35:13.672: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 13 13:35:13.701: INFO: Wrong image for pod: daemon-set-ht9f9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:13.701: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 13 13:35:13.732: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:14.735: INFO: Wrong image for pod: daemon-set-ht9f9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:14.735: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:14.738: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:15.736: INFO: Wrong image for pod: daemon-set-ht9f9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:15.736: INFO: Pod daemon-set-ht9f9 is not available Mar 13 13:35:15.736: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:15.740: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:16.734: INFO: Wrong image for pod: daemon-set-ht9f9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:16.734: INFO: Pod daemon-set-ht9f9 is not available Mar 13 13:35:16.734: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:16.736: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:17.736: INFO: Wrong image for pod: daemon-set-ht9f9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:17.736: INFO: Pod daemon-set-ht9f9 is not available Mar 13 13:35:17.736: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:17.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:18.736: INFO: Wrong image for pod: daemon-set-ht9f9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:18.736: INFO: Pod daemon-set-ht9f9 is not available Mar 13 13:35:18.736: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:18.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:19.736: INFO: Wrong image for pod: daemon-set-ht9f9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:19.736: INFO: Pod daemon-set-ht9f9 is not available Mar 13 13:35:19.736: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 13 13:35:19.740: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:20.736: INFO: Wrong image for pod: daemon-set-ht9f9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:20.736: INFO: Pod daemon-set-ht9f9 is not available Mar 13 13:35:20.736: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:20.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:21.734: INFO: Wrong image for pod: daemon-set-ht9f9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:21.734: INFO: Pod daemon-set-ht9f9 is not available Mar 13 13:35:21.734: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:21.736: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:22.736: INFO: Wrong image for pod: daemon-set-ht9f9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:22.736: INFO: Pod daemon-set-ht9f9 is not available Mar 13 13:35:22.736: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:22.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:23.736: INFO: Wrong image for pod: daemon-set-ht9f9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:23.736: INFO: Pod daemon-set-ht9f9 is not available Mar 13 13:35:23.736: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:23.740: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:24.735: INFO: Pod daemon-set-87mj4 is not available Mar 13 13:35:24.735: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:24.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:25.736: INFO: Pod daemon-set-87mj4 is not available Mar 13 13:35:25.736: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:25.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:26.735: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 13 13:35:26.737: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:27.736: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:27.736: INFO: Pod daemon-set-z2rk2 is not available Mar 13 13:35:27.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:28.735: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:28.735: INFO: Pod daemon-set-z2rk2 is not available Mar 13 13:35:28.738: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:29.735: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:29.735: INFO: Pod daemon-set-z2rk2 is not available Mar 13 13:35:29.738: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:30.735: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:30.735: INFO: Pod daemon-set-z2rk2 is not available Mar 13 13:35:30.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:31.736: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:31.736: INFO: Pod daemon-set-z2rk2 is not available Mar 13 13:35:31.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:32.735: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:32.735: INFO: Pod daemon-set-z2rk2 is not available Mar 13 13:35:32.738: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:33.736: INFO: Wrong image for pod: daemon-set-z2rk2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 13 13:35:33.736: INFO: Pod daemon-set-z2rk2 is not available Mar 13 13:35:33.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:34.735: INFO: Pod daemon-set-845ct is not available Mar 13 13:35:34.738: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 13 13:35:34.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:34.744: INFO: Number of nodes with available pods: 1 Mar 13 13:35:34.744: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:35:35.749: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:35.752: INFO: Number of nodes with available pods: 1 Mar 13 13:35:35.752: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:35:36.752: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:35:36.760: INFO: Number of nodes with available pods: 2 Mar 13 13:35:36.760: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6163, will wait for the garbage collector to delete the pods Mar 13 13:35:36.831: INFO: Deleting DaemonSet.extensions daemon-set took: 5.956481ms Mar 13 13:35:37.131: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.231031ms Mar 13 13:35:44.334: INFO: Number of nodes with available pods: 0 Mar 13 13:35:44.334: INFO: Number of running nodes: 0, number of available pods: 0 Mar 13 13:35:44.335: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6163/daemonsets","resourceVersion":"907769"},"items":null} Mar 13 13:35:44.337: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6163/pods","resourceVersion":"907769"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:35:44.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6163" for this suite. 
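The churn above is the RollingUpdate strategy at work: once the template image changed from nginx:1.14-alpine to the redis test image, the controller deleted and recreated daemon pods node by node, which is the "Wrong image for pod ... is not available" convergence loop in the log. A sketch of the relevant spec fields (name, labels, and container are illustrative):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// With RollingUpdate, editing the pod template below makes the
			// controller replace pods one node at a time instead of
			// requiring manual deletion (the OnDelete strategy).
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", ds.Spec.UpdateStrategy)
}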
Mar 13 13:35:50.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:35:50.432: INFO: namespace daemonsets-6163 deletion completed in 6.086415856s • [SLOW TEST:39.864 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:35:50.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 13 13:35:50.492: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:35:54.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9388" for this suite. 
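On a RestartAlways pod, init containers run to completion, strictly in order, before any regular container starts; that ordering is what the spec above asserts. A sketch of the shape such a pod takes (images and commands are illustrative, not the test's exact ones):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	pod := &corev1.Pod{
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Each init container must exit 0 before the next one starts;
			// only after all succeed is the regular container launched.
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init-2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run-1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/sh", "-c", "sleep 600"}},
			},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.InitContainers)
}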
Mar 13 13:36:16.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:36:16.942: INFO: namespace init-container-9388 deletion completed in 22.094941119s • [SLOW TEST:26.510 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:36:16.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:37:16.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4509" for this suite. 
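The minute-long soak above pins down an easy-to-miss distinction: a failing readiness probe keeps the pod out of Ready (and out of Service endpoints) but never restarts the container; only liveness probes trigger restarts. A sketch of a probe that can never succeed (image and period are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "never-ready",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"/bin/sh", "-c", "sleep 600"},
		ReadinessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				// /bin/false always exits non-zero, so the probe always fails.
				Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
			},
			PeriodSeconds: 5,
		},
	}
	// Expected steady state: Ready=false, RestartCount=0 -- readiness
	// failures are reported in status, never acted on with restarts.
	fmt.Printf("%+v\n", c.ReadinessProbe)
}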
Mar 13 13:37:39.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:37:39.087: INFO: namespace container-probe-4509 deletion completed in 22.093861795s • [SLOW TEST:82.145 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:37:39.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-f452fcf4-8e9d-4179-a5d0-e4126f2a3428 STEP: Creating a pod to test consume configMaps Mar 13 13:37:39.179: INFO: Waiting up to 5m0s for pod "pod-configmaps-d57d7a1d-8ace-4fca-8684-8e7a34ba9253" in namespace "configmap-7281" to be "success or failure" Mar 13 13:37:39.189: INFO: Pod "pod-configmaps-d57d7a1d-8ace-4fca-8684-8e7a34ba9253": Phase="Pending", Reason="", readiness=false. Elapsed: 9.93043ms Mar 13 13:37:41.192: INFO: Pod "pod-configmaps-d57d7a1d-8ace-4fca-8684-8e7a34ba9253": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013628583s STEP: Saw pod success Mar 13 13:37:41.193: INFO: Pod "pod-configmaps-d57d7a1d-8ace-4fca-8684-8e7a34ba9253" satisfied condition "success or failure" Mar 13 13:37:41.195: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d57d7a1d-8ace-4fca-8684-8e7a34ba9253 container configmap-volume-test: STEP: delete the pod Mar 13 13:37:41.229: INFO: Waiting for pod pod-configmaps-d57d7a1d-8ace-4fca-8684-8e7a34ba9253 to disappear Mar 13 13:37:41.231: INFO: Pod pod-configmaps-d57d7a1d-8ace-4fca-8684-8e7a34ba9253 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:37:41.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7281" for this suite. 
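ConfigMap volumes use the same KeyToPath mapping mechanism as the secret volumes earlier in this run; only the volume source type differs. A brief sketch (configmap name, key, and path are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				// Remap key "data-2" to the file "path/to/data-2" inside the mount.
				Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}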
Mar 13 13:37:47.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:37:47.326: INFO: namespace configmap-7281 deletion completed in 6.092726961s • [SLOW TEST:8.239 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:37:47.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Mar 13 13:37:47.459: INFO: Waiting up to 5m0s for pod "client-containers-5bd59bcb-c2cf-4c6c-a2e2-0552292a8db8" in namespace "containers-5271" to be "success or failure" Mar 13 13:37:47.477: INFO: Pod "client-containers-5bd59bcb-c2cf-4c6c-a2e2-0552292a8db8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.598323ms Mar 13 13:37:49.480: INFO: Pod "client-containers-5bd59bcb-c2cf-4c6c-a2e2-0552292a8db8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020810283s STEP: Saw pod success Mar 13 13:37:49.480: INFO: Pod "client-containers-5bd59bcb-c2cf-4c6c-a2e2-0552292a8db8" satisfied condition "success or failure" Mar 13 13:37:49.482: INFO: Trying to get logs from node iruya-worker2 pod client-containers-5bd59bcb-c2cf-4c6c-a2e2-0552292a8db8 container test-container: STEP: delete the pod Mar 13 13:37:49.501: INFO: Waiting for pod client-containers-5bd59bcb-c2cf-4c6c-a2e2-0552292a8db8 to disappear Mar 13 13:37:49.518: INFO: Pod client-containers-5bd59bcb-c2cf-4c6c-a2e2-0552292a8db8 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:37:49.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5271" for this suite. 
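The override test above exercises the mapping between the pod API and the container image config: Command replaces the image's ENTRYPOINT and Args replaces its CMD, while leaving both unset falls back to the image defaults (the companion "use the image defaults" spec later in this log covers that side). Sketch (command and args are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "docker.io/library/busybox:1.29",
		// Command overrides ENTRYPOINT; Args overrides CMD. Setting both
		// takes full control of what the container executes.
		Command: []string{"/bin/sh"},
		Args:    []string{"-c", "echo override all"},
	}
	fmt.Printf("entrypoint=%v cmd=%v\n", c.Command, c.Args)
}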
Mar 13 13:37:55.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:37:55.624: INFO: namespace containers-5271 deletion completed in 6.102838513s • [SLOW TEST:8.298 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:37:55.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:38:01.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4735" for this suite. Mar 13 13:38:07.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:38:07.304: INFO: namespace watch-4735 deletion completed in 6.174524731s • [SLOW TEST:11.680 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:38:07.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0313 13:38:13.413747 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 13 13:38:13.413: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:38:13.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5112" for this suite. Mar 13 13:38:19.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:38:19.484: INFO: namespace gc-5112 deletion completed in 6.063679934s • [SLOW TEST:12.179 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:38:19.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-06dd16ee-c38d-476b-9c61-56f2caf36799 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-06dd16ee-c38d-476b-9c61-56f2caf36799 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:38:23.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9053" for this suite.
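The update above propagates into the running pod because configmap-backed volume files are managed by the kubelet, which rewrites them and atomically swaps a symlink when the object changes; no pod restart is involved. Doing the same edit through client-go, as a sketch (names are taken from the spec above; the method signatures are the context-free ones of client-go releases matching this v1.15 server):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	cms := clientset.CoreV1().ConfigMaps("projected-9053")
	cm, err := cms.Get("projected-configmap-test-upd-06dd16ee-c38d-476b-9c61-56f2caf36799", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Mutate the data and write it back; the kubelet's sync loop then
	// updates the projected files inside the still-running pod.
	cm.Data = map[string]string{"data-1": "value-2"} // illustrative key/value
	if _, err := cms.Update(cm); err != nil {
		panic(err)
	}
	fmt.Println("configmap updated; the mounted file will follow shortly")
}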
Mar 13 13:38:45.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:38:45.735: INFO: namespace projected-9053 deletion completed in 22.140019018s • [SLOW TEST:26.251 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:38:45.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Mar 13 13:38:45.792: INFO: Waiting up to 5m0s for pod "client-containers-8d1ea8f3-2804-4702-9de8-31d17ba92653" in namespace "containers-8195" to be "success or failure" Mar 13 13:38:45.811: INFO: Pod "client-containers-8d1ea8f3-2804-4702-9de8-31d17ba92653": Phase="Pending", Reason="", readiness=false. Elapsed: 18.450411ms Mar 13 13:38:47.814: INFO: Pod "client-containers-8d1ea8f3-2804-4702-9de8-31d17ba92653": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022244135s STEP: Saw pod success Mar 13 13:38:47.815: INFO: Pod "client-containers-8d1ea8f3-2804-4702-9de8-31d17ba92653" satisfied condition "success or failure" Mar 13 13:38:47.817: INFO: Trying to get logs from node iruya-worker2 pod client-containers-8d1ea8f3-2804-4702-9de8-31d17ba92653 container test-container: STEP: delete the pod Mar 13 13:38:47.834: INFO: Waiting for pod client-containers-8d1ea8f3-2804-4702-9de8-31d17ba92653 to disappear Mar 13 13:38:47.838: INFO: Pod client-containers-8d1ea8f3-2804-4702-9de8-31d17ba92653 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:38:47.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8195" for this suite. 
Mar 13 13:38:53.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:38:53.982: INFO: namespace containers-8195 deletion completed in 6.140298826s • [SLOW TEST:8.246 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:38:53.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 13 13:39:02.118: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 13:39:02.121: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 13:39:04.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 13:39:04.123: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 13:39:06.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 13:39:06.124: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 13:39:08.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 13:39:08.125: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 13:39:10.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 13:39:10.124: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 13:39:12.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 13:39:12.124: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 13:39:14.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 13:39:14.124: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 13:39:16.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 13:39:16.123: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 13:39:18.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 13:39:18.124: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 13:39:20.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 13:39:20.124: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 13:39:22.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 13:39:22.124: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 13:39:24.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 13:39:24.124: INFO: Pod 
pod-with-prestop-exec-hook still exists Mar 13 13:39:26.121: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 13:39:26.124: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:39:26.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3871" for this suite. Mar 13 13:39:48.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:39:48.217: INFO: namespace container-lifecycle-hook-3871 deletion completed in 22.083817561s • [SLOW TEST:54.235 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:39:48.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-5790/configmap-test-c0b2be68-5bd9-4443-a9cd-83388921a8a3 STEP: Creating a pod to test consume configMaps Mar 13 13:39:48.295: INFO: Waiting up to 5m0s for pod "pod-configmaps-318b8a7d-c4a1-4a41-a130-de4f1bda5345" in namespace "configmap-5790" to be "success or failure" Mar 13 13:39:48.315: INFO: Pod "pod-configmaps-318b8a7d-c4a1-4a41-a130-de4f1bda5345": Phase="Pending", Reason="", readiness=false. Elapsed: 20.449256ms Mar 13 13:39:50.318: INFO: Pod "pod-configmaps-318b8a7d-c4a1-4a41-a130-de4f1bda5345": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023504939s STEP: Saw pod success Mar 13 13:39:50.318: INFO: Pod "pod-configmaps-318b8a7d-c4a1-4a41-a130-de4f1bda5345" satisfied condition "success or failure" Mar 13 13:39:50.320: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-318b8a7d-c4a1-4a41-a130-de4f1bda5345 container env-test: STEP: delete the pod Mar 13 13:39:50.364: INFO: Waiting for pod pod-configmaps-318b8a7d-c4a1-4a41-a130-de4f1bda5345 to disappear Mar 13 13:39:50.372: INFO: Pod pod-configmaps-318b8a7d-c4a1-4a41-a130-de4f1bda5345 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:39:50.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5790" for this suite. 
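The env-var consumption tested above corresponds to a configMapKeyRef entry in the pod's env list. A minimal sketch with illustrative names:

kubectl create configmap demo-env-cm --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-cm-env
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-env-cm
          key: data-1
EOF
kubectl logs demo-cm-env | grep CONFIG_DATA_1    # CONFIG_DATA_1=value-1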
Mar 13 13:39:56.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:39:56.457: INFO: namespace configmap-5790 deletion completed in 6.081587495s • [SLOW TEST:8.240 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:39:56.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 13 13:39:56.499: INFO: Waiting up to 5m0s for pod "downward-api-f0984d49-e280-4311-a25f-dd9b17689005" in namespace "downward-api-8042" to be "success or failure" Mar 13 13:39:56.524: INFO: Pod "downward-api-f0984d49-e280-4311-a25f-dd9b17689005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.482888ms Mar 13 13:39:58.528: INFO: Pod "downward-api-f0984d49-e280-4311-a25f-dd9b17689005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.028880095s STEP: Saw pod success Mar 13 13:39:58.528: INFO: Pod "downward-api-f0984d49-e280-4311-a25f-dd9b17689005" satisfied condition "success or failure" Mar 13 13:39:58.530: INFO: Trying to get logs from node iruya-worker2 pod downward-api-f0984d49-e280-4311-a25f-dd9b17689005 container dapi-container: STEP: delete the pod Mar 13 13:39:58.564: INFO: Waiting for pod downward-api-f0984d49-e280-4311-a25f-dd9b17689005 to disappear Mar 13 13:39:58.569: INFO: Pod downward-api-f0984d49-e280-4311-a25f-dd9b17689005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:39:58.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8042" for this suite. 
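The downward-API env vars checked above come from fieldRef entries on the container. A sketch, again with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-downward-env
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
    env:
    - name: POD_NAME
      valueFrom: {fieldRef: {fieldPath: metadata.name}}
    - name: POD_NAMESPACE
      valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
    - name: POD_IP
      valueFrom: {fieldRef: {fieldPath: status.podIP}}
EOF
kubectl logs demo-downward-env    # pod name, namespace and IP, as injected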
Mar 13 13:40:04.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:40:04.666: INFO: namespace downward-api-8042 deletion completed in 6.093146972s • [SLOW TEST:8.208 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:40:04.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0313 13:40:05.795022 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 13 13:40:05.795: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:40:05.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8092" for this suite. 
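The cascading deletion verified above is the default when a Deployment is deleted: its ReplicaSet and pods are garbage collected via owner references. A quick hand check (illustrative names):

kubectl create deployment demo-deploy --image=nginx:1.15-alpine
kubectl get rs -l app=demo-deploy       # one ReplicaSet, owned by the Deployment
kubectl delete deployment demo-deploy   # default propagation cascades to dependents
kubectl get rs -l app=demo-deploy       # ReplicaSet (and its pods) are garbage collected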
Mar 13 13:40:11.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:40:11.888: INFO: namespace gc-8092 deletion completed in 6.090724097s • [SLOW TEST:7.221 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:40:11.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Mar 13 13:40:11.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 13 13:40:12.082: INFO: stderr: "" Mar 13 13:40:12.082: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:40:12.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4488" for this suite. 
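The cluster-info validation above can be repeated directly against any cluster; the dump form is handy when debugging, as the command's own hint says:

kubectl cluster-info                         # master/KubeDNS endpoints, as asserted above
kubectl cluster-info dump > cluster-dump.txt # full cluster state for offline inspection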
Mar 13 13:40:18.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:40:18.164: INFO: namespace kubectl-4488 deletion completed in 6.079160336s • [SLOW TEST:6.276 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:40:18.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-bead4e05-76e8-4bd1-a7eb-0730abf400a8 STEP: Creating secret with name s-test-opt-upd-c704d0de-ca87-4d93-afc6-be91deecbe9a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-bead4e05-76e8-4bd1-a7eb-0730abf400a8 STEP: Updating secret s-test-opt-upd-c704d0de-ca87-4d93-afc6-be91deecbe9a STEP: Creating secret with name s-test-opt-create-24a1b81c-4eb7-4c3b-85ce-8fdcbbef0346 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:40:26.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2001" for this suite. 
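The optional-secret semantics exercised above can be sketched as follows. demo-secret and demo-opt-secret are illustrative, and the usual kubelet resync delay applies before mounted content appears:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-opt-secret
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: sec
      mountPath: /etc/sec
  volumes:
  - name: sec
    secret:
      secretName: demo-secret   # may not exist yet: optional
      optional: true
EOF
# The pod starts even though demo-secret is absent; creating it later
# populates /etc/sec once the kubelet resyncs.
kubectl create secret generic demo-secret --from-literal=user=alice
kubectl exec demo-opt-secret -- cat /etc/sec/user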
Mar 13 13:40:48.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:40:48.441: INFO: namespace secrets-2001 deletion completed in 22.078459372s • [SLOW TEST:30.276 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:40:48.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Mar 13 13:40:48.997: INFO: created pod pod-service-account-defaultsa Mar 13 13:40:48.997: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 13 13:40:49.003: INFO: created pod pod-service-account-mountsa Mar 13 13:40:49.003: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 13 13:40:49.008: INFO: created pod pod-service-account-nomountsa Mar 13 13:40:49.008: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 13 13:40:49.033: INFO: created pod pod-service-account-defaultsa-mountspec Mar 13 13:40:49.033: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 13 13:40:49.048: INFO: created pod pod-service-account-mountsa-mountspec Mar 13 13:40:49.048: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 13 13:40:49.061: INFO: created pod pod-service-account-nomountsa-mountspec Mar 13 13:40:49.061: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 13 13:40:49.094: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 13 13:40:49.094: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 13 13:40:49.115: INFO: created pod pod-service-account-mountsa-nomountspec Mar 13 13:40:49.115: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 13 13:40:49.133: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 13 13:40:49.133: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:40:49.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7058" for this suite. 
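Opting out of token automount, as tested above, is a single field settable on the pod spec (as here) or on the ServiceAccount; the pod-level value wins when both are set. Sketch with an illustrative pod name:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-nomount
spec:
  automountServiceAccountToken: false
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl exec demo-nomount -- ls /var/run/secrets/kubernetes.io/serviceaccount
# fails: no token volume was mounted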
Mar 13 13:40:55.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:40:55.315: INFO: namespace svcaccounts-7058 deletion completed in 6.144767496s • [SLOW TEST:6.873 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:40:55.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Mar 13 13:40:57.970: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6102 pod-service-account-1e675187-153e-4a6f-921a-e7e383c30dc4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 13 13:40:58.193: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6102 pod-service-account-1e675187-153e-4a6f-921a-e7e383c30dc4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 13 13:40:58.367: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6102 pod-service-account-1e675187-153e-4a6f-921a-e7e383c30dc4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:40:58.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6102" for this suite. 
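The three exec reads above generalize to any running pod whose service account token is mounted; demo-pod below is an illustrative stand-in:

kubectl exec demo-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
kubectl exec demo-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl exec demo-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace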
Mar 13 13:41:04.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:41:04.641: INFO: namespace svcaccounts-6102 deletion completed in 6.092122637s • [SLOW TEST:9.326 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:41:04.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 13 13:41:07.219: INFO: Successfully updated pod "annotationupdate2aa534e7-6920-41e9-b811-581d6f1ec47d" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:41:09.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5345" for this suite. 
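The annotation-update propagation tested above uses a downwardAPI volume item pointing at metadata.annotations; the kubelet rewrites the file when the pod's metadata changes. A sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-annotations
  annotations:
    build: "one"
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
kubectl annotate pod demo-annotations build=two --overwrite
kubectl exec demo-annotations -- cat /etc/podinfo/annotations   # refreshed on kubelet sync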
Mar 13 13:41:31.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:41:31.347: INFO: namespace downward-api-5345 deletion completed in 22.089759183s • [SLOW TEST:26.706 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:41:31.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-c60147d3-f58b-4c16-a55b-8f33b9838b6d STEP: Creating a pod to test consume configMaps Mar 13 13:41:31.399: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dce533dd-8998-480b-85d4-489b71b4c3da" in namespace "projected-236" to be "success or failure" Mar 13 13:41:31.418: INFO: Pod "pod-projected-configmaps-dce533dd-8998-480b-85d4-489b71b4c3da": Phase="Pending", Reason="", readiness=false. Elapsed: 19.336019ms Mar 13 13:41:33.421: INFO: Pod "pod-projected-configmaps-dce533dd-8998-480b-85d4-489b71b4c3da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021945538s STEP: Saw pod success Mar 13 13:41:33.421: INFO: Pod "pod-projected-configmaps-dce533dd-8998-480b-85d4-489b71b4c3da" satisfied condition "success or failure" Mar 13 13:41:33.423: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-dce533dd-8998-480b-85d4-489b71b4c3da container projected-configmap-volume-test: STEP: delete the pod Mar 13 13:41:33.440: INFO: Waiting for pod pod-projected-configmaps-dce533dd-8998-480b-85d4-489b71b4c3da to disappear Mar 13 13:41:33.478: INFO: Pod pod-projected-configmaps-dce533dd-8998-480b-85d4-489b71b4c3da no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:41:33.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-236" for this suite. 
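The non-root mapped-key consumption above combines a pod-level securityContext with projected items that remap a configMap key to a chosen path. Sketch; names and the UID are illustrative:

kubectl create configmap demo-map-cm --from-literal=data-2=value-2
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-nonroot
spec:
  securityContext:
    runAsUser: 1000             # run as non-root
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/cfg/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-map-cm
          items:
          - key: data-2
            path: path/to/data-2   # key remapped to a nested path
EOF
kubectl logs demo-nonroot          # value-2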
Mar 13 13:41:39.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:41:39.601: INFO: namespace projected-236 deletion completed in 6.12015897s • [SLOW TEST:8.254 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:41:39.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:42:03.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8031" for this suite. Mar 13 13:42:09.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:42:09.927: INFO: namespace namespaces-8031 deletion completed in 6.112090947s STEP: Destroying namespace "nsdeletetest-2339" for this suite. Mar 13 13:42:09.928: INFO: Namespace nsdeletetest-2339 was already deleted STEP: Destroying namespace "nsdeletetest-5821" for this suite. 
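The namespace-deletion guarantee tested above is easy to check by hand (illustrative names; kubectl delete waits for finalization by default):

kubectl create namespace demo-ns
kubectl run demo --image=nginx:1.15-alpine -n demo-ns --restart=Never
kubectl delete namespace demo-ns   # returns once the namespace and its pods are gone
kubectl get pods -n demo-ns        # errors: the namespace no longer exists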
Mar 13 13:42:15.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:42:16.036: INFO: namespace nsdeletetest-5821 deletion completed in 6.107766238s • [SLOW TEST:36.433 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:42:16.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 13 13:42:16.120: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:42:16.122: INFO: Number of nodes with available pods: 0 Mar 13 13:42:16.122: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:42:17.149: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:42:17.152: INFO: Number of nodes with available pods: 0 Mar 13 13:42:17.152: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:42:18.126: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:42:18.128: INFO: Number of nodes with available pods: 1 Mar 13 13:42:18.128: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:42:19.126: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:42:19.128: INFO: Number of nodes with available pods: 2 Mar 13 13:42:19.128: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 13 13:42:19.156: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:42:19.158: INFO: Number of nodes with available pods: 1 Mar 13 13:42:19.158: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:42:20.163: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:42:20.166: INFO: Number of nodes with available pods: 1 Mar 13 13:42:20.166: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:42:21.162: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:42:21.165: INFO: Number of nodes with available pods: 1 Mar 13 13:42:21.165: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:42:22.161: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:42:22.164: INFO: Number of nodes with available pods: 1 Mar 13 13:42:22.164: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:42:23.162: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:42:23.166: INFO: Number of nodes with available pods: 1 Mar 13 13:42:23.166: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:42:24.163: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:42:24.165: INFO: Number of nodes with available pods: 1 Mar 13 13:42:24.165: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:42:25.163: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:42:25.166: INFO: Number of nodes with available pods: 2 Mar 13 13:42:25.166: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6051, will wait for the garbage collector to delete the pods Mar 13 13:42:25.227: INFO: Deleting DaemonSet.extensions daemon-set took: 5.643118ms Mar 13 13:42:25.527: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.236215ms Mar 13 13:42:34.544: INFO: Number of nodes with available pods: 0 Mar 13 13:42:34.544: INFO: Number of running nodes: 0, number of available pods: 0 Mar 13 13:42:34.545: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6051/daemonsets","resourceVersion":"909556"},"items":null} Mar 13 13:42:34.546: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6051/pods","resourceVersion":"909556"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 
13:42:34.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6051" for this suite. Mar 13 13:42:40.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:42:40.626: INFO: namespace daemonsets-6051 deletion completed in 6.073585281s • [SLOW TEST:24.590 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:42:40.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 13:42:40.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 13 13:42:40.792: INFO: stderr: "" Mar 13 13:42:40.792: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.10\", GitCommit:\"1bea6c00a7055edef03f1d4bb58b773fa8917f11\", GitTreeState:\"clean\", BuildDate:\"2020-03-09T11:07:06Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:42:40.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7459" for this suite. 
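The version check above prints both client and server build info; the short form below was available on kubectl of this vintage:

kubectl version          # full client and server version.Info, as asserted above
kubectl version --short  # compact one-line-per-side form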
Mar 13 13:42:46.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:42:46.883: INFO: namespace kubectl-7459 deletion completed in 6.08730946s • [SLOW TEST:6.256 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:42:46.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-dd79be6b-7308-4441-b936-be26e99e8a2d STEP: Creating a pod to test consume secrets Mar 13 13:42:46.960: INFO: Waiting up to 5m0s for pod "pod-secrets-0cac503a-2c94-4e89-a85d-dee2ef90ca07" in namespace "secrets-7584" to be "success or failure" Mar 13 13:42:46.979: INFO: Pod "pod-secrets-0cac503a-2c94-4e89-a85d-dee2ef90ca07": Phase="Pending", Reason="", readiness=false. Elapsed: 19.859737ms Mar 13 13:42:48.983: INFO: Pod "pod-secrets-0cac503a-2c94-4e89-a85d-dee2ef90ca07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023664572s STEP: Saw pod success Mar 13 13:42:48.983: INFO: Pod "pod-secrets-0cac503a-2c94-4e89-a85d-dee2ef90ca07" satisfied condition "success or failure" Mar 13 13:42:48.990: INFO: Trying to get logs from node iruya-worker pod pod-secrets-0cac503a-2c94-4e89-a85d-dee2ef90ca07 container secret-volume-test: STEP: delete the pod Mar 13 13:42:49.011: INFO: Waiting for pod pod-secrets-0cac503a-2c94-4e89-a85d-dee2ef90ca07 to disappear Mar 13 13:42:49.026: INFO: Pod pod-secrets-0cac503a-2c94-4e89-a85d-dee2ef90ca07 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:42:49.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7584" for this suite. 
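Key-to-path mapping in a secret volume, as consumed above, looks like this in a pod spec (illustrative names):

kubectl create secret generic demo-map-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-secret-map
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/sec/new-path-data-1"]
    volumeMounts:
    - name: sec
      mountPath: /etc/sec
  volumes:
  - name: sec
    secret:
      secretName: demo-map-secret
      items:
      - key: data-1
        path: new-path-data-1    # key mapped to a different filename
EOF
kubectl logs demo-secret-map     # value-1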
Mar 13 13:42:55.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:42:55.117: INFO: namespace secrets-7584 deletion completed in 6.088363105s • [SLOW TEST:8.233 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:42:55.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0313 13:43:35.202348 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 13 13:43:35.202: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:43:35.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1790" for this suite. 
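Orphaning, as exercised above, corresponds to the Orphan propagation policy on delete. A sketch with illustrative names; note the kubectl flag spelling changed across releases (--cascade=false on kubectl contemporary with this log, --cascade=orphan on newer ones):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: demo-rc
spec:
  replicas: 2
  selector: {app: demo-rc}
  template:
    metadata:
      labels: {app: demo-rc}
    spec:
      containers:
      - name: main
        image: nginx:1.15-alpine
EOF
kubectl delete rc demo-rc --cascade=false   # orphan the pods instead of cascading
kubectl get pods -l app=demo-rc             # pods survive, now ownerless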
Mar 13 13:43:41.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:43:41.296: INFO: namespace gc-1790 deletion completed in 6.090100999s • [SLOW TEST:46.179 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:43:41.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4986 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 13 13:43:41.362: INFO: Found 0 stateful pods, waiting for 3 Mar 13 13:43:51.365: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 13 13:43:51.365: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 13 13:43:51.365: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 13 13:43:51.384: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 13 13:44:01.430: INFO: Updating stateful set ss2 Mar 13 13:44:01.469: INFO: Waiting for Pod statefulset-4986/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 13 13:44:11.567: INFO: Found 2 stateful pods, waiting for 3 Mar 13 13:44:21.571: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 13 13:44:21.571: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 13 13:44:21.571: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 13 13:44:21.591: INFO: Updating stateful set ss2 Mar 13 13:44:21.644: INFO: Waiting for Pod statefulset-4986/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 13 13:44:31.651: INFO: Waiting for Pod statefulset-4986/ss2-1 to have revision ss2-6c5cd755cd update revision 
ss2-7c9b54fd4c Mar 13 13:44:41.666: INFO: Updating stateful set ss2 Mar 13 13:44:41.692: INFO: Waiting for StatefulSet statefulset-4986/ss2 to complete update Mar 13 13:44:41.692: INFO: Waiting for Pod statefulset-4986/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 13 13:44:51.699: INFO: Waiting for StatefulSet statefulset-4986/ss2 to complete update Mar 13 13:44:51.700: INFO: Waiting for Pod statefulset-4986/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 13 13:45:01.700: INFO: Deleting all statefulset in ns statefulset-4986 Mar 13 13:45:01.703: INFO: Scaling statefulset ss2 to 0 Mar 13 13:45:41.716: INFO: Waiting for statefulset status.replicas updated to 0 Mar 13 13:45:41.719: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:45:41.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4986" for this suite. Mar 13 13:45:47.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:45:47.815: INFO: namespace statefulset-4986 deletion completed in 6.079083023s • [SLOW TEST:126.519 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:45:47.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 13 13:45:49.892: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 13 13:46:05.011: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 
13:46:05.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6359" for this suite. Mar 13 13:46:11.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:46:11.100: INFO: namespace pods-6359 deletion completed in 6.083832886s • [SLOW TEST:23.284 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:46:11.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 13:46:11.172: INFO: Waiting up to 5m0s for pod "downwardapi-volume-02f856b5-9792-4f89-a9f6-fd998823803c" in namespace "downward-api-444" to be "success or failure" Mar 13 13:46:11.179: INFO: Pod "downwardapi-volume-02f856b5-9792-4f89-a9f6-fd998823803c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.816509ms Mar 13 13:46:13.182: INFO: Pod "downwardapi-volume-02f856b5-9792-4f89-a9f6-fd998823803c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009363937s STEP: Saw pod success Mar 13 13:46:13.182: INFO: Pod "downwardapi-volume-02f856b5-9792-4f89-a9f6-fd998823803c" satisfied condition "success or failure" Mar 13 13:46:13.183: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-02f856b5-9792-4f89-a9f6-fd998823803c container client-container: STEP: delete the pod Mar 13 13:46:13.198: INFO: Waiting for pod downwardapi-volume-02f856b5-9792-4f89-a9f6-fd998823803c to disappear Mar 13 13:46:13.203: INFO: Pod downwardapi-volume-02f856b5-9792-4f89-a9f6-fd998823803c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:46:13.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-444" for this suite. 
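Setting a per-item file mode, as tested above, is one field on the downwardAPI item. Sketch with illustrative names; the mounted path is a symlink into a ..data directory, hence the dereferencing stat:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-mode
  labels: {app: demo}
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        mode: 0400               # octal in YAML; use 256 (decimal) in JSON
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl exec demo-mode -- stat -L -c '%a' /etc/podinfo/labels   # 400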
Mar 13 13:46:19.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:46:19.255: INFO: namespace downward-api-444 deletion completed in 6.050561328s • [SLOW TEST:8.155 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:46:19.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-173563f6-8a38-4f62-bb56-1a3d0a544635 STEP: Creating configMap with name cm-test-opt-upd-cde38b09-dd23-4baa-9687-0c480f5f3b0f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-173563f6-8a38-4f62-bb56-1a3d0a544635 STEP: Updating configmap cm-test-opt-upd-cde38b09-dd23-4baa-9687-0c480f5f3b0f STEP: Creating configMap with name cm-test-opt-create-e424fd5b-bb84-4d93-99fe-1ce355793898 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:47:27.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9902" for this suite. 
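The optional-configMap delete/update/create cycle above mirrors the optional-secret sketch earlier; here is the configMap form of the volume, again with illustrative names and subject to kubelet resync delay:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-opt-cm
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: demo-late-cm         # does not exist yet: optional tolerates that
      optional: true
EOF
kubectl create configmap demo-late-cm --from-literal=key=value
# after the kubelet resyncs:
kubectl exec demo-opt-cm -- cat /etc/cfg/key    # value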
Mar 13 13:47:49.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:47:49.919: INFO: namespace configmap-9902 deletion completed in 22.13696636s • [SLOW TEST:90.664 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:47:49.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 13 13:47:52.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-f74c5c7b-6b0a-4f47-a119-8453a965dccd -c busybox-main-container --namespace=emptydir-1895 -- cat /usr/share/volumeshare/shareddata.txt' Mar 13 13:47:54.422: INFO: stderr: "I0313 13:47:54.350381 1027 log.go:172] (0xc000956420) (0xc00010a8c0) Create stream\nI0313 13:47:54.350414 1027 log.go:172] (0xc000956420) (0xc00010a8c0) Stream added, broadcasting: 1\nI0313 13:47:54.352003 1027 log.go:172] (0xc000956420) Reply frame received for 1\nI0313 13:47:54.352028 1027 log.go:172] (0xc000956420) (0xc000a0c000) Create stream\nI0313 13:47:54.352037 1027 log.go:172] (0xc000956420) (0xc000a0c000) Stream added, broadcasting: 3\nI0313 13:47:54.352627 1027 log.go:172] (0xc000956420) Reply frame received for 3\nI0313 13:47:54.352649 1027 log.go:172] (0xc000956420) (0xc0006263c0) Create stream\nI0313 13:47:54.352658 1027 log.go:172] (0xc000956420) (0xc0006263c0) Stream added, broadcasting: 5\nI0313 13:47:54.353214 1027 log.go:172] (0xc000956420) Reply frame received for 5\nI0313 13:47:54.416653 1027 log.go:172] (0xc000956420) Data frame received for 3\nI0313 13:47:54.416683 1027 log.go:172] (0xc000956420) Data frame received for 5\nI0313 13:47:54.416709 1027 log.go:172] (0xc0006263c0) (5) Data frame handling\nI0313 13:47:54.416776 1027 log.go:172] (0xc000a0c000) (3) Data frame handling\nI0313 13:47:54.416829 1027 log.go:172] (0xc000a0c000) (3) Data frame sent\nI0313 13:47:54.416842 1027 log.go:172] (0xc000956420) Data frame received for 3\nI0313 13:47:54.416852 1027 log.go:172] (0xc000a0c000) (3) Data frame handling\nI0313 13:47:54.417804 1027 log.go:172] (0xc000956420) Data frame received for 1\nI0313 13:47:54.417833 1027 log.go:172] (0xc00010a8c0) (1) Data frame handling\nI0313 13:47:54.417861 1027 log.go:172] (0xc00010a8c0) (1) Data frame sent\nI0313 13:47:54.417890 1027 log.go:172] (0xc000956420) (0xc00010a8c0) Stream removed, broadcasting: 1\nI0313 13:47:54.417914 1027 log.go:172] (0xc000956420) Go away received\nI0313 13:47:54.418482 1027 log.go:172] (0xc000956420) (0xc00010a8c0) Stream 
removed, broadcasting: 1\nI0313 13:47:54.418500 1027 log.go:172] (0xc000956420) (0xc000a0c000) Stream removed, broadcasting: 3\nI0313 13:47:54.418511 1027 log.go:172] (0xc000956420) (0xc0006263c0) Stream removed, broadcasting: 5\n" Mar 13 13:47:54.422: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:47:54.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1895" for this suite. Mar 13 13:48:00.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:48:00.518: INFO: namespace emptydir-1895 deletion completed in 6.091559734s • [SLOW TEST:10.598 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:48:00.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-c972cd2f-4f35-4589-acc7-c3594c17136b STEP: Creating a pod to test consume secrets Mar 13 13:48:00.585: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9db8e7a9-17f6-4586-83b1-987e7c297ad9" in namespace "projected-4363" to be "success or failure" Mar 13 13:48:00.607: INFO: Pod "pod-projected-secrets-9db8e7a9-17f6-4586-83b1-987e7c297ad9": Phase="Pending", Reason="", readiness=false. Elapsed: 21.921915ms Mar 13 13:48:02.611: INFO: Pod "pod-projected-secrets-9db8e7a9-17f6-4586-83b1-987e7c297ad9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025755324s STEP: Saw pod success Mar 13 13:48:02.611: INFO: Pod "pod-projected-secrets-9db8e7a9-17f6-4586-83b1-987e7c297ad9" satisfied condition "success or failure" Mar 13 13:48:02.614: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-9db8e7a9-17f6-4586-83b1-987e7c297ad9 container secret-volume-test: STEP: delete the pod Mar 13 13:48:02.675: INFO: Waiting for pod pod-projected-secrets-9db8e7a9-17f6-4586-83b1-987e7c297ad9 to disappear Mar 13 13:48:02.678: INFO: Pod pod-projected-secrets-9db8e7a9-17f6-4586-83b1-987e7c297ad9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:48:02.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4363" for this suite. 
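Note: The two storage specs above exercise, first, an emptyDir shared between two containers of one pod (the kubectl exec above reads back the string the other container wrote) and, second, a single secret consumed through two separate projected volumes. Both fit in one illustrative pod; the echoed string mirrors the stdout above, everything else is a sketch:

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-sharedvolume-demo
  spec:
    containers:
    - name: busybox-main-container
      image: busybox
      command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
      volumeMounts:
      - { name: share, mountPath: /usr/share/volumeshare }
      - { name: secret-a, mountPath: /etc/secret-a }
      - { name: secret-b, mountPath: /etc/secret-b }
    - name: busybox-sub-container
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - { name: share, mountPath: /usr/share/volumeshare }
    volumes:
    - name: share
      emptyDir: {}
    - name: secret-a
      projected:
        sources:
        - secret: { name: demo-secret }
    - name: secret-b
      projected:
        sources:
        - secret: { name: demo-secret }
  EOF
  kubectl exec pod-sharedvolume-demo -c busybox-sub-container -- cat /usr/share/volumeshare/shareddata.txt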
Mar 13 13:48:08.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:48:08.773: INFO: namespace projected-4363 deletion completed in 6.090773497s • [SLOW TEST:8.255 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:48:08.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Mar 13 13:48:08.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8886' Mar 13 13:48:09.084: INFO: stderr: "" Mar 13 13:48:09.084: INFO: stdout: "pod/pause created\n" Mar 13 13:48:09.084: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 13 13:48:09.084: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8886" to be "running and ready" Mar 13 13:48:09.105: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 21.068244ms Mar 13 13:48:11.109: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.024762527s Mar 13 13:48:11.109: INFO: Pod "pause" satisfied condition "running and ready" Mar 13 13:48:11.109: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Mar 13 13:48:11.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8886' Mar 13 13:48:11.222: INFO: stderr: "" Mar 13 13:48:11.222: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 13 13:48:11.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8886' Mar 13 13:48:11.368: INFO: stderr: "" Mar 13 13:48:11.368: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 13 13:48:11.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8886' Mar 13 13:48:11.460: INFO: stderr: "" Mar 13 13:48:11.460: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 13 13:48:11.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8886' Mar 13 13:48:11.528: INFO: stderr: "" Mar 13 13:48:11.528: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Mar 13 13:48:11.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8886' Mar 13 13:48:11.653: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 13 13:48:11.653: INFO: stdout: "pod \"pause\" force deleted\n" Mar 13 13:48:11.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8886' Mar 13 13:48:11.748: INFO: stderr: "No resources found.\n" Mar 13 13:48:11.748: INFO: stdout: "" Mar 13 13:48:11.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8886 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 13 13:48:11.809: INFO: stderr: "" Mar 13 13:48:11.809: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:48:11.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8886" for this suite. 
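Note: The label round-trip above is driven entirely by kubectl; stripped of the harness's --kubeconfig plumbing, the same sequence works against any pod:

  kubectl label pods pause testing-label=testing-label-value
  kubectl get pod pause -L testing-label     # TESTING-LABEL column shows testing-label-value
  kubectl label pods pause testing-label-    # a trailing dash removes the label
  kubectl get pod pause -L testing-label     # the column now prints empty, as in the stdout above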
Mar 13 13:48:17.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:48:17.904: INFO: namespace kubectl-8886 deletion completed in 6.092224809s • [SLOW TEST:9.130 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:48:17.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-5d33f571-90f7-4cc4-8248-ae35949fa1d5 STEP: Creating configMap with name cm-test-opt-upd-781b38ad-9bf5-496a-8791-e09045fbc126 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-5d33f571-90f7-4cc4-8248-ae35949fa1d5 STEP: Updating configmap cm-test-opt-upd-781b38ad-9bf5-496a-8791-e09045fbc126 STEP: Creating configMap with name cm-test-opt-create-0e7adc17-f13d-4fdf-b3ca-580864ce2ef9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:49:54.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7926" for this suite. 
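Note: This spec repeats the earlier optional-ConfigMap scenario, but the ConfigMaps are consumed through a projected volume rather than a plain configMap volume. Only the volume stanza changes; an illustrative fragment (the ConfigMap name here echoes the one above, the volume name is invented):

  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-test-opt-upd-781b38ad-9bf5-496a-8791-e09045fbc126
          optional: true

Update propagation again rides the kubelet sync period, hence the similarly long "waiting to observe update in volume" phase.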
Mar 13 13:50:16.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:50:16.573: INFO: namespace projected-7926 deletion completed in 22.082635737s • [SLOW TEST:118.669 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:50:16.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 13 13:50:16.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3172' Mar 13 13:50:16.938: INFO: stderr: "" Mar 13 13:50:16.938: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 13 13:50:16.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3172' Mar 13 13:50:17.033: INFO: stderr: "" Mar 13 13:50:17.033: INFO: stdout: "update-demo-nautilus-jb5qs update-demo-nautilus-khd5j " Mar 13 13:50:17.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jb5qs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3172' Mar 13 13:50:17.110: INFO: stderr: "" Mar 13 13:50:17.110: INFO: stdout: "" Mar 13 13:50:17.110: INFO: update-demo-nautilus-jb5qs is created but not running Mar 13 13:50:22.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3172' Mar 13 13:50:22.221: INFO: stderr: "" Mar 13 13:50:22.221: INFO: stdout: "update-demo-nautilus-jb5qs update-demo-nautilus-khd5j " Mar 13 13:50:22.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jb5qs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3172' Mar 13 13:50:22.302: INFO: stderr: "" Mar 13 13:50:22.303: INFO: stdout: "true" Mar 13 13:50:22.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jb5qs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3172' Mar 13 13:50:22.386: INFO: stderr: "" Mar 13 13:50:22.386: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 13 13:50:22.386: INFO: validating pod update-demo-nautilus-jb5qs Mar 13 13:50:22.389: INFO: got data: { "image": "nautilus.jpg" } Mar 13 13:50:22.389: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 13 13:50:22.389: INFO: update-demo-nautilus-jb5qs is verified up and running Mar 13 13:50:22.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-khd5j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3172' Mar 13 13:50:22.457: INFO: stderr: "" Mar 13 13:50:22.457: INFO: stdout: "true" Mar 13 13:50:22.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-khd5j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3172' Mar 13 13:50:22.520: INFO: stderr: "" Mar 13 13:50:22.520: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 13 13:50:22.520: INFO: validating pod update-demo-nautilus-khd5j Mar 13 13:50:22.523: INFO: got data: { "image": "nautilus.jpg" } Mar 13 13:50:22.523: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 13 13:50:22.523: INFO: update-demo-nautilus-khd5j is verified up and running STEP: using delete to clean up resources Mar 13 13:50:22.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3172' Mar 13 13:50:22.600: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 13 13:50:22.600: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 13 13:50:22.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3172' Mar 13 13:50:22.668: INFO: stderr: "No resources found.\n" Mar 13 13:50:22.668: INFO: stdout: "" Mar 13 13:50:22.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3172 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 13 13:50:22.729: INFO: stderr: "" Mar 13 13:50:22.729: INFO: stdout: "update-demo-nautilus-jb5qs\nupdate-demo-nautilus-khd5j\n" Mar 13 13:50:23.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3172' Mar 13 13:50:23.309: INFO: stderr: "No resources found.\n" Mar 13 13:50:23.309: INFO: stdout: "" Mar 13 13:50:23.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3172 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 13 13:50:23.373: INFO: stderr: "" Mar 13 13:50:23.373: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:50:23.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3172" for this suite. Mar 13 13:50:45.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:50:45.450: INFO: namespace kubectl-3172 deletion completed in 22.074678304s • [SLOW TEST:28.877 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:50:45.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 13:50:45.514: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3953fa4e-e784-477b-ada5-87a53a0813ba" in namespace "downward-api-6389" to be "success or 
failure" Mar 13 13:50:45.536: INFO: Pod "downwardapi-volume-3953fa4e-e784-477b-ada5-87a53a0813ba": Phase="Pending", Reason="", readiness=false. Elapsed: 21.959276ms Mar 13 13:50:47.540: INFO: Pod "downwardapi-volume-3953fa4e-e784-477b-ada5-87a53a0813ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025640383s STEP: Saw pod success Mar 13 13:50:47.540: INFO: Pod "downwardapi-volume-3953fa4e-e784-477b-ada5-87a53a0813ba" satisfied condition "success or failure" Mar 13 13:50:47.543: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3953fa4e-e784-477b-ada5-87a53a0813ba container client-container: STEP: delete the pod Mar 13 13:50:47.575: INFO: Waiting for pod downwardapi-volume-3953fa4e-e784-477b-ada5-87a53a0813ba to disappear Mar 13 13:50:47.584: INFO: Pod downwardapi-volume-3953fa4e-e784-477b-ada5-87a53a0813ba no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:50:47.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6389" for this suite. Mar 13 13:50:53.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:50:53.676: INFO: namespace downward-api-6389 deletion completed in 6.088906566s • [SLOW TEST:8.225 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:50:53.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 13:50:53.754: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cce98762-5bd1-42a9-8fa9-2d449d7d5bfb" in namespace "projected-7636" to be "success or failure" Mar 13 13:50:53.770: INFO: Pod "downwardapi-volume-cce98762-5bd1-42a9-8fa9-2d449d7d5bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.805553ms Mar 13 13:50:55.774: INFO: Pod "downwardapi-volume-cce98762-5bd1-42a9-8fa9-2d449d7d5bfb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.019921154s STEP: Saw pod success Mar 13 13:50:55.774: INFO: Pod "downwardapi-volume-cce98762-5bd1-42a9-8fa9-2d449d7d5bfb" satisfied condition "success or failure" Mar 13 13:50:55.776: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cce98762-5bd1-42a9-8fa9-2d449d7d5bfb container client-container: STEP: delete the pod Mar 13 13:50:55.795: INFO: Waiting for pod downwardapi-volume-cce98762-5bd1-42a9-8fa9-2d449d7d5bfb to disappear Mar 13 13:50:55.799: INFO: Pod downwardapi-volume-cce98762-5bd1-42a9-8fa9-2d449d7d5bfb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:50:55.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7636" for this suite. Mar 13 13:51:01.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:51:01.891: INFO: namespace projected-7636 deletion completed in 6.089466347s • [SLOW TEST:8.215 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:51:01.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:51:04.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5347" for this suite. 
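Note: The two specs above pair naturally in one manifest: a projected downward API volume exposing the container's own CPU request as a file, alongside secret and ConfigMap volumes in the same pod (the "wrapper volumes should not conflict" point). A sketch, with every name illustrative:

  kubectl create secret generic wrapper-secret --from-literal=k=v
  kubectl create configmap wrapper-cm --from-literal=k=v
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: wrapper-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests: { cpu: 250m }
      volumeMounts:
      - { name: podinfo, mountPath: /etc/podinfo }
      - { name: sec, mountPath: /etc/sec }
      - { name: cfg, mountPath: /etc/cfg }
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.cpu
                divisor: 1m    # without a divisor the value is rounded up to whole cores
    - name: sec
      secret: { secretName: wrapper-secret }
    - name: cfg
      configMap: { name: wrapper-cm }
  EOF

With the 1m divisor the file reads 250; the default divisor of 1 would round the 250m request up to 1.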
Mar 13 13:51:10.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:51:10.398: INFO: namespace emptydir-wrapper-5347 deletion completed in 6.097683016s • [SLOW TEST:8.506 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:51:10.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 13 13:51:12.475: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:51:12.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-474" for this suite. 
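Note: The terminated-container spec above is about where the termination message comes from. FallbackToLogsOnError only falls back to the container log when the message file is empty and the container failed; here the pod succeeds and the file is written, so the message is read from the file, which is what the "Expected: &{OK}" line asserts. An illustrative reproduction:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-demo
  spec:
    restartPolicy: Never
    containers:
    - name: c
      image: busybox
      command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # OK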
Mar 13 13:51:18.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:51:18.613: INFO: namespace container-runtime-474 deletion completed in 6.100704383s • [SLOW TEST:8.216 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:51:18.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Mar 13 13:51:18.688: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9703" to be "success or failure" Mar 13 13:51:18.690: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1.716758ms Mar 13 13:51:20.971: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.282800112s Mar 13 13:51:22.975: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.287081692s STEP: Saw pod success Mar 13 13:51:22.975: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 13 13:51:22.979: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 13 13:51:23.001: INFO: Waiting for pod pod-host-path-test to disappear Mar 13 13:51:23.004: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:51:23.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9703" for this suite. 
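Note: The hostPath spec mounts a directory from the node and has the container stat it; the mode assertion is [LinuxOnly] because it depends on host filesystem semantics. A hand-rolled version (path and names illustrative; DirectoryOrCreate avoids failing when the host path does not exist yet):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostpath-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-1
      image: busybox
      command: ["sh", "-c", "ls -ld /test-volume"]
      volumeMounts:
      - { name: vol, mountPath: /test-volume }
    volumes:
    - name: vol
      hostPath:
        path: /tmp/hostpath-demo
        type: DirectoryOrCreate
  EOF
  kubectl logs hostpath-demo    # prints the directory mode the container observed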
Mar 13 13:51:29.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:51:29.147: INFO: namespace hostpath-9703 deletion completed in 6.139192105s • [SLOW TEST:10.533 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:51:29.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 13:51:31.296: INFO: Waiting up to 5m0s for pod "client-envvars-6df5b86f-7f0e-4ed4-be5e-5bc3a4939622" in namespace "pods-7747" to be "success or failure" Mar 13 13:51:31.301: INFO: Pod "client-envvars-6df5b86f-7f0e-4ed4-be5e-5bc3a4939622": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445297ms Mar 13 13:51:33.304: INFO: Pod "client-envvars-6df5b86f-7f0e-4ed4-be5e-5bc3a4939622": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00724396s STEP: Saw pod success Mar 13 13:51:33.304: INFO: Pod "client-envvars-6df5b86f-7f0e-4ed4-be5e-5bc3a4939622" satisfied condition "success or failure" Mar 13 13:51:33.305: INFO: Trying to get logs from node iruya-worker pod client-envvars-6df5b86f-7f0e-4ed4-be5e-5bc3a4939622 container env3cont: STEP: delete the pod Mar 13 13:51:33.324: INFO: Waiting for pod client-envvars-6df5b86f-7f0e-4ed4-be5e-5bc3a4939622 to disappear Mar 13 13:51:33.327: INFO: Pod client-envvars-6df5b86f-7f0e-4ed4-be5e-5bc3a4939622 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:51:33.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7747" for this suite. 
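Note: The env-vars spec depends on the kubelet injecting <SERVICE_NAME>_SERVICE_HOST and <SERVICE_NAME>_SERVICE_PORT variables for every service that already exists when a pod starts, which is why the spec brings up its server pod and service before the client pod whose environment it inspects. An illustrative check (service and pod names invented; output values depend on the cluster):

  kubectl create service clusterip fooservice --tcp=8765:8080
  kubectl run env-demo --image=busybox --restart=Never -- sh -c 'env | grep FOOSERVICE'
  kubectl logs env-demo
  # FOOSERVICE_SERVICE_HOST=<cluster IP>
  # FOOSERVICE_SERVICE_PORT=8765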
Mar 13 13:52:11.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:52:11.408: INFO: namespace pods-7747 deletion completed in 38.078699785s • [SLOW TEST:42.261 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:52:11.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-963 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-963 to expose endpoints map[] Mar 13 13:52:11.494: INFO: successfully validated that service endpoint-test2 in namespace services-963 exposes endpoints map[] (16.855371ms elapsed) STEP: Creating pod pod1 in namespace services-963 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-963 to expose endpoints map[pod1:[80]] Mar 13 13:52:14.580: INFO: successfully validated that service endpoint-test2 in namespace services-963 exposes endpoints map[pod1:[80]] (3.050145833s elapsed) STEP: Creating pod pod2 in namespace services-963 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-963 to expose endpoints map[pod1:[80] pod2:[80]] Mar 13 13:52:16.620: INFO: successfully validated that service endpoint-test2 in namespace services-963 exposes endpoints map[pod1:[80] pod2:[80]] (2.036133446s elapsed) STEP: Deleting pod pod1 in namespace services-963 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-963 to expose endpoints map[pod2:[80]] Mar 13 13:52:17.658: INFO: successfully validated that service endpoint-test2 in namespace services-963 exposes endpoints map[pod2:[80]] (1.035616512s elapsed) STEP: Deleting pod pod2 in namespace services-963 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-963 to expose endpoints map[] Mar 13 13:52:18.675: INFO: successfully validated that service endpoint-test2 in namespace services-963 exposes endpoints map[] (1.013160665s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:52:18.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-963" for this suite. 
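Note: The endpoints spec is pure control-plane bookkeeping: the Endpoints object of a service gains and loses addresses as pods matching its selector come and go, which is what each "successfully validated ... exposes endpoints" line records. The same dance by hand (service name mirrors the spec, the rest is illustrative; kubectl create service clusterip sets the selector app=<name>, which the pod label matches):

  kubectl create service clusterip endpoint-test2 --tcp=80
  kubectl run pod1 --image=nginx --restart=Never --labels=app=endpoint-test2 --port=80
  kubectl get endpoints endpoint-test2    # gains <pod1 IP>:80 once pod1 is running
  kubectl delete pod pod1
  kubectl get endpoints endpoint-test2    # back to <none>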
Mar 13 13:52:40.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:52:40.770: INFO: namespace services-963 deletion completed in 22.066586561s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:29.362 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:52:40.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-7db8bbc0-4d84-4ed6-82f5-245acea8bec3 STEP: Creating a pod to test consume secrets Mar 13 13:52:40.832: INFO: Waiting up to 5m0s for pod "pod-secrets-aeed3321-55c5-43b6-bf97-c88178215a6b" in namespace "secrets-2006" to be "success or failure" Mar 13 13:52:40.846: INFO: Pod "pod-secrets-aeed3321-55c5-43b6-bf97-c88178215a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.120623ms Mar 13 13:52:42.849: INFO: Pod "pod-secrets-aeed3321-55c5-43b6-bf97-c88178215a6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016274792s STEP: Saw pod success Mar 13 13:52:42.849: INFO: Pod "pod-secrets-aeed3321-55c5-43b6-bf97-c88178215a6b" satisfied condition "success or failure" Mar 13 13:52:42.850: INFO: Trying to get logs from node iruya-worker pod pod-secrets-aeed3321-55c5-43b6-bf97-c88178215a6b container secret-env-test: STEP: delete the pod Mar 13 13:52:42.863: INFO: Waiting for pod pod-secrets-aeed3321-55c5-43b6-bf97-c88178215a6b to disappear Mar 13 13:52:42.868: INFO: Pod pod-secrets-aeed3321-55c5-43b6-bf97-c88178215a6b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:52:42.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2006" for this suite. 
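Note: Secrets-as-env is the counterpart of the secret volume specs earlier: the kubelet resolves a secretKeyRef once at container start and injects the decoded value. A sketch (names illustrative):

  kubectl create secret generic secret-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-env-test
      image: busybox
      command: ["sh", "-c", "echo $SECRET_DATA"]
      env:
      - name: SECRET_DATA
        valueFrom:
          secretKeyRef:
            name: secret-demo
            key: data-1
  EOF
  kubectl logs secret-env-demo    # value-1

Unlike volume-mounted secrets, env vars are fixed at container start; later changes to the Secret are not reflected.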
Mar 13 13:52:48.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:52:48.981: INFO: namespace secrets-2006 deletion completed in 6.110180736s • [SLOW TEST:8.210 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:52:48.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:52:51.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4347" for this suite. Mar 13 13:53:35.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:53:35.179: INFO: namespace kubelet-test-4347 deletion completed in 44.117807746s • [SLOW TEST:46.198 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:53:35.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-gm5m STEP: Creating a pod to test atomic-volume-subpath Mar 13 13:53:35.271: INFO: Waiting up 
to 5m0s for pod "pod-subpath-test-projected-gm5m" in namespace "subpath-974" to be "success or failure" Mar 13 13:53:35.276: INFO: Pod "pod-subpath-test-projected-gm5m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.773365ms Mar 13 13:53:37.279: INFO: Pod "pod-subpath-test-projected-gm5m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008147729s Mar 13 13:53:39.283: INFO: Pod "pod-subpath-test-projected-gm5m": Phase="Running", Reason="", readiness=true. Elapsed: 4.01195226s Mar 13 13:53:41.287: INFO: Pod "pod-subpath-test-projected-gm5m": Phase="Running", Reason="", readiness=true. Elapsed: 6.015822563s Mar 13 13:53:43.289: INFO: Pod "pod-subpath-test-projected-gm5m": Phase="Running", Reason="", readiness=true. Elapsed: 8.018193754s Mar 13 13:53:45.292: INFO: Pod "pod-subpath-test-projected-gm5m": Phase="Running", Reason="", readiness=true. Elapsed: 10.021249021s Mar 13 13:53:47.295: INFO: Pod "pod-subpath-test-projected-gm5m": Phase="Running", Reason="", readiness=true. Elapsed: 12.024471639s Mar 13 13:53:49.299: INFO: Pod "pod-subpath-test-projected-gm5m": Phase="Running", Reason="", readiness=true. Elapsed: 14.028030307s Mar 13 13:53:51.301: INFO: Pod "pod-subpath-test-projected-gm5m": Phase="Running", Reason="", readiness=true. Elapsed: 16.030686081s Mar 13 13:53:53.304: INFO: Pod "pod-subpath-test-projected-gm5m": Phase="Running", Reason="", readiness=true. Elapsed: 18.033470432s Mar 13 13:53:55.307: INFO: Pod "pod-subpath-test-projected-gm5m": Phase="Running", Reason="", readiness=true. Elapsed: 20.036474172s Mar 13 13:53:57.310: INFO: Pod "pod-subpath-test-projected-gm5m": Phase="Running", Reason="", readiness=true. Elapsed: 22.039191367s Mar 13 13:53:59.313: INFO: Pod "pod-subpath-test-projected-gm5m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.042129969s STEP: Saw pod success Mar 13 13:53:59.313: INFO: Pod "pod-subpath-test-projected-gm5m" satisfied condition "success or failure" Mar 13 13:53:59.315: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-gm5m container test-container-subpath-projected-gm5m: STEP: delete the pod Mar 13 13:53:59.343: INFO: Waiting for pod pod-subpath-test-projected-gm5m to disappear Mar 13 13:53:59.351: INFO: Pod pod-subpath-test-projected-gm5m no longer exists STEP: Deleting pod pod-subpath-test-projected-gm5m Mar 13 13:53:59.351: INFO: Deleting pod "pod-subpath-test-projected-gm5m" in namespace "subpath-974" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:53:59.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-974" for this suite. 
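Note: The subpath spec above mounts a single file out of an atomic-writer (projected) volume via subPath, then leaves the container Running for roughly 24 seconds of re-reads before it exits. A subPath mount pins whatever the path resolved to at mount time, so unlike a whole-volume mount it does not pick up atomic-writer updates; that stability is what the long Running phase verifies. Illustrative stanzas (names invented):

  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test-volume/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /test-volume/podname
      subPath: podname
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef: { fieldPath: metadata.name }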
Mar 13 13:54:05.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:54:05.424: INFO: namespace subpath-974 deletion completed in 6.066597319s • [SLOW TEST:30.244 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:54:05.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-256.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-256.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 13 13:54:09.521: INFO: DNS probes using dns-256/dns-test-36448a34-ed72-4324-9582-bc0c52b073f2 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:54:09.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-256" for this suite. 
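Note: The probe pods above loop dig over both UDP and TCP against the cluster domain from two distro images and write OK marker files the test then collects. For a one-off check, the same lookup can be run directly (the busybox nslookup applet is only reliable in the 1.28 image, hence the pinned tag; pod name illustrative):

  kubectl run dns-check --image=busybox:1.28 --restart=Never -it --rm -- nslookup kubernetes.default.svc.cluster.local
  # or, from any pod with dig installed, exactly as the spec does:
  dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A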
Mar 13 13:54:15.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:54:15.686: INFO: namespace dns-256 deletion completed in 6.128525777s • [SLOW TEST:10.262 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:54:15.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 13 13:54:15.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3947' Mar 13 13:54:16.028: INFO: stderr: "" Mar 13 13:54:16.028: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 13 13:54:16.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3947' Mar 13 13:54:16.140: INFO: stderr: "" Mar 13 13:54:16.140: INFO: stdout: "update-demo-nautilus-4gb6s update-demo-nautilus-f2q8v " Mar 13 13:54:16.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4gb6s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3947' Mar 13 13:54:16.221: INFO: stderr: "" Mar 13 13:54:16.221: INFO: stdout: "" Mar 13 13:54:16.221: INFO: update-demo-nautilus-4gb6s is created but not running Mar 13 13:54:21.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3947' Mar 13 13:54:21.335: INFO: stderr: "" Mar 13 13:54:21.335: INFO: stdout: "update-demo-nautilus-4gb6s update-demo-nautilus-f2q8v " Mar 13 13:54:21.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4gb6s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3947' Mar 13 13:54:21.408: INFO: stderr: "" Mar 13 13:54:21.408: INFO: stdout: "true" Mar 13 13:54:21.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4gb6s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3947' Mar 13 13:54:21.479: INFO: stderr: "" Mar 13 13:54:21.479: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 13 13:54:21.479: INFO: validating pod update-demo-nautilus-4gb6s Mar 13 13:54:21.481: INFO: got data: { "image": "nautilus.jpg" } Mar 13 13:54:21.481: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 13 13:54:21.481: INFO: update-demo-nautilus-4gb6s is verified up and running Mar 13 13:54:21.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f2q8v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3947' Mar 13 13:54:21.547: INFO: stderr: "" Mar 13 13:54:21.547: INFO: stdout: "true" Mar 13 13:54:21.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f2q8v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3947' Mar 13 13:54:21.609: INFO: stderr: "" Mar 13 13:54:21.609: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 13 13:54:21.609: INFO: validating pod update-demo-nautilus-f2q8v Mar 13 13:54:21.611: INFO: got data: { "image": "nautilus.jpg" } Mar 13 13:54:21.611: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 13 13:54:21.611: INFO: update-demo-nautilus-f2q8v is verified up and running STEP: scaling down the replication controller Mar 13 13:54:21.619: INFO: scanned /root for discovery docs: Mar 13 13:54:21.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3947' Mar 13 13:54:22.723: INFO: stderr: "" Mar 13 13:54:22.723: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 13 13:54:22.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3947' Mar 13 13:54:22.840: INFO: stderr: "" Mar 13 13:54:22.840: INFO: stdout: "update-demo-nautilus-4gb6s update-demo-nautilus-f2q8v " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 13 13:54:27.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3947' Mar 13 13:54:27.946: INFO: stderr: "" Mar 13 13:54:27.946: INFO: stdout: "update-demo-nautilus-4gb6s update-demo-nautilus-f2q8v " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 13 13:54:32.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3947' Mar 13 13:54:33.059: INFO: stderr: "" Mar 13 13:54:33.059: INFO: stdout: "update-demo-nautilus-4gb6s update-demo-nautilus-f2q8v " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 13 13:54:38.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3947' Mar 13 13:54:38.128: INFO: stderr: "" Mar 13 13:54:38.128: INFO: stdout: "update-demo-nautilus-4gb6s " Mar 13 13:54:38.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4gb6s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3947' Mar 13 13:54:38.211: INFO: stderr: "" Mar 13 13:54:38.211: INFO: stdout: "true" Mar 13 13:54:38.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4gb6s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3947' Mar 13 13:54:38.281: INFO: stderr: "" Mar 13 13:54:38.281: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 13 13:54:38.281: INFO: validating pod update-demo-nautilus-4gb6s Mar 13 13:54:38.283: INFO: got data: { "image": "nautilus.jpg" } Mar 13 13:54:38.283: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 13 13:54:38.283: INFO: update-demo-nautilus-4gb6s is verified up and running STEP: scaling up the replication controller Mar 13 13:54:38.284: INFO: scanned /root for discovery docs: Mar 13 13:54:38.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3947' Mar 13 13:54:39.371: INFO: stderr: "" Mar 13 13:54:39.371: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
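
Editor's note: the `-o template` queries above print "true" only when a containerStatuses entry named update-demo carries a state.running block; that is how the test distinguishes created-but-pending pods from running ones. A jsonpath equivalent (the expression is an assumption, not taken from this run):

  # Non-empty output means the update-demo container has reached state Running:
  kubectl get pod update-demo-nautilus-4gb6s -n kubectl-3947 \
    -o jsonpath='{.status.containerStatuses[?(@.name=="update-demo")].state.running.startedAt}'
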
Mar 13 13:54:39.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3947' Mar 13 13:54:39.458: INFO: stderr: "" Mar 13 13:54:39.458: INFO: stdout: "update-demo-nautilus-4gb6s update-demo-nautilus-88tkk " Mar 13 13:54:39.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4gb6s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3947' Mar 13 13:54:39.551: INFO: stderr: "" Mar 13 13:54:39.551: INFO: stdout: "true" Mar 13 13:54:39.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4gb6s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3947' Mar 13 13:54:39.636: INFO: stderr: "" Mar 13 13:54:39.636: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 13 13:54:39.636: INFO: validating pod update-demo-nautilus-4gb6s Mar 13 13:54:39.638: INFO: got data: { "image": "nautilus.jpg" } Mar 13 13:54:39.638: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 13 13:54:39.638: INFO: update-demo-nautilus-4gb6s is verified up and running Mar 13 13:54:39.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-88tkk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3947' Mar 13 13:54:39.704: INFO: stderr: "" Mar 13 13:54:39.704: INFO: stdout: "true" Mar 13 13:54:39.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-88tkk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3947' Mar 13 13:54:39.770: INFO: stderr: "" Mar 13 13:54:39.770: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 13 13:54:39.770: INFO: validating pod update-demo-nautilus-88tkk Mar 13 13:54:39.772: INFO: got data: { "image": "nautilus.jpg" } Mar 13 13:54:39.772: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 13 13:54:39.772: INFO: update-demo-nautilus-88tkk is verified up and running STEP: using delete to clean up resources Mar 13 13:54:39.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3947' Mar 13 13:54:39.845: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 13 13:54:39.845: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 13 13:54:39.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3947' Mar 13 13:54:39.924: INFO: stderr: "No resources found.\n" Mar 13 13:54:39.924: INFO: stdout: "" Mar 13 13:54:39.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3947 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 13 13:54:39.989: INFO: stderr: "" Mar 13 13:54:39.989: INFO: stdout: "update-demo-nautilus-4gb6s\nupdate-demo-nautilus-88tkk\n" Mar 13 13:54:40.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3947' Mar 13 13:54:40.574: INFO: stderr: "No resources found.\n" Mar 13 13:54:40.574: INFO: stdout: "" Mar 13 13:54:40.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3947 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 13 13:54:40.644: INFO: stderr: "" Mar 13 13:54:40.645: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:54:40.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3947" for this suite. Mar 13 13:55:02.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:55:02.753: INFO: namespace kubectl-3947 deletion completed in 22.106196484s • [SLOW TEST:47.066 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:55:02.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 13 13:55:05.543: INFO: Successfully updated pod "annotationupdatee49cbea0-ee47-4a41-8b84-ee4d6c2199a5" [AfterEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:55:07.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5124" for this suite. Mar 13 13:55:29.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:55:29.661: INFO: namespace projected-5124 deletion completed in 22.080870987s • [SLOW TEST:26.907 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:55:29.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-adc2c00e-b024-4a57-8606-f6b7ab2678a4 STEP: Creating a pod to test consume configMaps Mar 13 13:55:29.726: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3bff71d3-9611-4c52-8161-16afa1f2b892" in namespace "projected-7617" to be "success or failure" Mar 13 13:55:29.731: INFO: Pod "pod-projected-configmaps-3bff71d3-9611-4c52-8161-16afa1f2b892": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337248ms Mar 13 13:55:31.735: INFO: Pod "pod-projected-configmaps-3bff71d3-9611-4c52-8161-16afa1f2b892": Phase="Running", Reason="", readiness=true. Elapsed: 2.008220645s Mar 13 13:55:33.738: INFO: Pod "pod-projected-configmaps-3bff71d3-9611-4c52-8161-16afa1f2b892": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011844632s STEP: Saw pod success Mar 13 13:55:33.738: INFO: Pod "pod-projected-configmaps-3bff71d3-9611-4c52-8161-16afa1f2b892" satisfied condition "success or failure" Mar 13 13:55:33.741: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-3bff71d3-9611-4c52-8161-16afa1f2b892 container projected-configmap-volume-test: STEP: delete the pod Mar 13 13:55:33.798: INFO: Waiting for pod pod-projected-configmaps-3bff71d3-9611-4c52-8161-16afa1f2b892 to disappear Mar 13 13:55:33.803: INFO: Pod pod-projected-configmaps-3bff71d3-9611-4c52-8161-16afa1f2b892 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:55:33.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7617" for this suite. 
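
Editor's note: "consumable from pods in volume with mappings" means the ConfigMap key is renamed on its way into the projected volume via an items mapping, and the container must read the remapped path. A minimal sketch of the shape under test (all names are illustrative; the ConfigMap is assumed to exist with a key named data-1):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/projected/renamed-key"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: my-config          # assumed pre-existing ConfigMap
            items:
            - key: data-1            # original key ...
              path: renamed-key      # ... exposed under a remapped file name
  EOF
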
Mar 13 13:55:39.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:55:39.900: INFO: namespace projected-7617 deletion completed in 6.093982104s • [SLOW TEST:10.239 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:55:39.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 13:55:39.992: INFO: Create a RollingUpdate DaemonSet Mar 13 13:55:39.996: INFO: Check that daemon pods launch on every node of the cluster Mar 13 13:55:40.002: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:55:40.020: INFO: Number of nodes with available pods: 0 Mar 13 13:55:40.020: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:55:41.046: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:55:41.049: INFO: Number of nodes with available pods: 0 Mar 13 13:55:41.049: INFO: Node iruya-worker is running more than one daemon pod Mar 13 13:55:42.024: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:55:42.027: INFO: Number of nodes with available pods: 2 Mar 13 13:55:42.027: INFO: Number of running nodes: 2, number of available pods: 2 Mar 13 13:55:42.027: INFO: Update the DaemonSet to trigger a rollout Mar 13 13:55:42.031: INFO: Updating DaemonSet daemon-set Mar 13 13:55:55.073: INFO: Roll back the DaemonSet before rollout is complete Mar 13 13:55:55.077: INFO: Updating DaemonSet daemon-set Mar 13 13:55:55.078: INFO: Make sure DaemonSet rollback is complete Mar 13 13:55:55.086: INFO: Wrong image for pod: daemon-set-pjcvp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Mar 13 13:55:55.086: INFO: Pod daemon-set-pjcvp is not available Mar 13 13:55:55.092: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:55:56.096: INFO: Wrong image for pod: daemon-set-pjcvp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Mar 13 13:55:56.096: INFO: Pod daemon-set-pjcvp is not available Mar 13 13:55:56.100: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 13:55:57.096: INFO: Pod daemon-set-46cgf is not available Mar 13 13:55:57.099: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9524, will wait for the garbage collector to delete the pods Mar 13 13:55:57.161: INFO: Deleting DaemonSet.extensions daemon-set took: 4.685561ms Mar 13 13:55:57.461: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.204987ms Mar 13 13:56:04.363: INFO: Number of nodes with available pods: 0 Mar 13 13:56:04.363: INFO: Number of running nodes: 0, number of available pods: 0 Mar 13 13:56:04.365: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9524/daemonsets","resourceVersion":"912466"},"items":null} Mar 13 13:56:04.367: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9524/pods","resourceVersion":"912466"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:56:04.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9524" for this suite. 
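
Editor's note: the rollback scenario above updates the DaemonSet to an unpullable image (foo:non-existent) mid-rollout, then reverts before the rollout completes; the assertion is that the pod which was never touched by the bad rollout is not restarted unnecessarily. A rough CLI reenactment (the container name "app" is an assumption, not from this run):

  kubectl -n daemonsets-9524 set image daemonset/daemon-set app=foo:non-existent
  kubectl -n daemonsets-9524 rollout undo daemonset/daemon-set
  kubectl -n daemonsets-9524 rollout status daemonset/daemon-set
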
Mar 13 13:56:10.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:56:10.465: INFO: namespace daemonsets-9524 deletion completed in 6.089182084s • [SLOW TEST:30.565 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:56:10.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 13:56:10.528: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25d8a061-8554-4098-9fae-0dac8801b0a0" in namespace "projected-6594" to be "success or failure" Mar 13 13:56:10.530: INFO: Pod "downwardapi-volume-25d8a061-8554-4098-9fae-0dac8801b0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093335ms Mar 13 13:56:12.537: INFO: Pod "downwardapi-volume-25d8a061-8554-4098-9fae-0dac8801b0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009431554s Mar 13 13:56:14.540: INFO: Pod "downwardapi-volume-25d8a061-8554-4098-9fae-0dac8801b0a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012704648s STEP: Saw pod success Mar 13 13:56:14.540: INFO: Pod "downwardapi-volume-25d8a061-8554-4098-9fae-0dac8801b0a0" satisfied condition "success or failure" Mar 13 13:56:14.543: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-25d8a061-8554-4098-9fae-0dac8801b0a0 container client-container: STEP: delete the pod Mar 13 13:56:14.561: INFO: Waiting for pod downwardapi-volume-25d8a061-8554-4098-9fae-0dac8801b0a0 to disappear Mar 13 13:56:14.565: INFO: Pod downwardapi-volume-25d8a061-8554-4098-9fae-0dac8801b0a0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:56:14.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6594" for this suite. 
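
Editor's note: the pod under test mounts a projected downward API volume whose file contains the container's own memory limit, and the framework diffs the file content against the expected value. A minimal sketch of that shape (names and the 64Mi limit are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      resources:
        limits:
          memory: "64Mi"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
  EOF
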
Mar 13 13:56:20.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:56:20.672: INFO: namespace projected-6594 deletion completed in 6.1024885s • [SLOW TEST:10.206 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:56:20.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-5951, will wait for the garbage collector to delete the pods Mar 13 13:56:22.803: INFO: Deleting Job.batch foo took: 14.982642ms Mar 13 13:56:23.103: INFO: Terminating Job.batch foo pods took: 300.240705ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:56:56.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5951" for this suite. 
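
Editor's note: "Ensuring job was deleted" accounts for most of this test's runtime because the test waits for the garbage collector to reap the Job's pods after the Job object itself is gone. The same flow by hand:

  # Delete the Job, then watch its pods (labelled job-name=foo) drain away:
  kubectl -n job-5951 delete job foo
  kubectl -n job-5951 get pods -l job-name=foo -w
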
Mar 13 13:57:02.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:57:03.014: INFO: namespace job-5951 deletion completed in 6.104264608s • [SLOW TEST:42.341 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:57:03.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 13 13:57:03.145: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8546,SelfLink:/api/v1/namespaces/watch-8546/configmaps/e2e-watch-test-label-changed,UID:fc290b27-c693-465b-85d5-74cd39dd26d9,ResourceVersion:912687,Generation:0,CreationTimestamp:2020-03-13 13:57:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 13 13:57:03.145: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8546,SelfLink:/api/v1/namespaces/watch-8546/configmaps/e2e-watch-test-label-changed,UID:fc290b27-c693-465b-85d5-74cd39dd26d9,ResourceVersion:912688,Generation:0,CreationTimestamp:2020-03-13 13:57:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 13 13:57:03.145: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8546,SelfLink:/api/v1/namespaces/watch-8546/configmaps/e2e-watch-test-label-changed,UID:fc290b27-c693-465b-85d5-74cd39dd26d9,ResourceVersion:912689,Generation:0,CreationTimestamp:2020-03-13 13:57:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 13 13:57:13.240: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8546,SelfLink:/api/v1/namespaces/watch-8546/configmaps/e2e-watch-test-label-changed,UID:fc290b27-c693-465b-85d5-74cd39dd26d9,ResourceVersion:912710,Generation:0,CreationTimestamp:2020-03-13 13:57:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 13 13:57:13.241: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8546,SelfLink:/api/v1/namespaces/watch-8546/configmaps/e2e-watch-test-label-changed,UID:fc290b27-c693-465b-85d5-74cd39dd26d9,ResourceVersion:912711,Generation:0,CreationTimestamp:2020-03-13 13:57:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 13 13:57:13.241: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8546,SelfLink:/api/v1/namespaces/watch-8546/configmaps/e2e-watch-test-label-changed,UID:fc290b27-c693-465b-85d5-74cd39dd26d9,ResourceVersion:912712,Generation:0,CreationTimestamp:2020-03-13 13:57:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:57:13.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8546" for this suite. 
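
Editor's note: the sequence above shows that a label-selected watch delivers a DELETED event when the object's label stops matching the selector (even though the ConfigMap still exists), and a fresh ADDED event when the label is restored. The CLI analogue:

  # Events pause while the label is changed away from the selector and resume
  # as ADDED once the matching label value is restored:
  kubectl -n watch-8546 get configmaps \
    -l watch-this-configmap=label-changed-and-restored -w
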
Mar 13 13:57:19.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:57:19.426: INFO: namespace watch-8546 deletion completed in 6.18146235s • [SLOW TEST:16.412 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:57:19.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 13:57:19.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5156' Mar 13 13:57:19.742: INFO: stderr: "" Mar 13 13:57:19.742: INFO: stdout: "replicationcontroller/redis-master created\n" Mar 13 13:57:19.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5156' Mar 13 13:57:20.030: INFO: stderr: "" Mar 13 13:57:20.030: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Mar 13 13:57:21.035: INFO: Selector matched 1 pods for map[app:redis] Mar 13 13:57:21.035: INFO: Found 1 / 1 Mar 13 13:57:21.035: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 13 13:57:21.038: INFO: Selector matched 1 pods for map[app:redis] Mar 13 13:57:21.038: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
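
Editor's note: the entries that follow exercise `kubectl describe` against each resource kind in turn; describe is a read-only aggregation of spec, status, and related events, which is why the test only asserts that relevant fields appear in the output. Stripped of the harness's explicit --kubeconfig, the invocations below are simply:

  kubectl -n kubectl-5156 describe pod redis-master-44zpt
  kubectl -n kubectl-5156 describe rc redis-master
  kubectl -n kubectl-5156 describe service redis-master
  kubectl describe node iruya-control-plane
  kubectl describe namespace kubectl-5156
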
Mar 13 13:57:21.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-44zpt --namespace=kubectl-5156' Mar 13 13:57:21.134: INFO: stderr: "" Mar 13 13:57:21.134: INFO: stdout: "Name: redis-master-44zpt\nNamespace: kubectl-5156\nPriority: 0\nNode: iruya-worker2/172.17.0.7\nStart Time: Fri, 13 Mar 2020 13:57:19 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.133\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://22f886befc0c8e389bf1ea273ad104b2ccbbb0a411aadac560609cf188b2808b\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 13 Mar 2020 13:57:20 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-v68m9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-v68m9:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-v68m9\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-5156/redis-master-44zpt to iruya-worker2\n Normal Pulled 1s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n" Mar 13 13:57:21.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-5156' Mar 13 13:57:21.228: INFO: stderr: "" Mar 13 13:57:21.228: INFO: stdout: "Name: redis-master\nNamespace: kubectl-5156\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: redis-master-44zpt\n" Mar 13 13:57:21.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-5156' Mar 13 13:57:21.297: INFO: stderr: "" Mar 13 13:57:21.297: INFO: stdout: "Name: redis-master\nNamespace: kubectl-5156\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.111.26.28\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.133:6379\nSession Affinity: None\nEvents: \n" Mar 13 13:57:21.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Mar 13 13:57:21.377: INFO: stderr: "" Mar 13 13:57:21.377: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 14:39:09 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 13 Mar 2020 13:56:38 +0000 Sun, 08 Mar 2020 14:39:09 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 13 Mar 2020 13:56:38 +0000 Sun, 08 Mar 2020 14:39:09 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 13 Mar 2020 13:56:38 +0000 Sun, 08 Mar 2020 14:39:09 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 13 Mar 2020 13:56:38 +0000 Sun, 08 Mar 2020 14:39:40 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.8\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: 02c556471391403b9d1ff5a92e24de90\n System UUID: 23c4adc2-c7ef-4117-bc7b-74afff25f445\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-5d4dd4b4db-f26vw 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d23h\n kube-system coredns-5d4dd4b4db-t49n4 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d23h\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d23h\n kube-system kindnet-bjxs9 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 4d23h\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 4d23h\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 4d23h\n kube-system kube-proxy-hfxdn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d23h\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 4d23h\n local-path-storage local-path-provisioner-d4947b89c-j6x79 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d23h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 13 13:57:21.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5156' Mar 13 13:57:21.446: INFO: stderr: "" Mar 13 13:57:21.446: INFO: stdout: "Name: kubectl-5156\nLabels: e2e-framework=kubectl\n e2e-run=7613ec7f-fee1-41dc-b308-b26ac2430ce3\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:57:21.446: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5156" for this suite. Mar 13 13:57:43.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:57:43.571: INFO: namespace kubectl-5156 deletion completed in 22.123396469s • [SLOW TEST:24.145 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:57:43.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 13 13:57:45.688: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:57:45.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9528" for this suite. 
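
Editor's note: the container-runtime test above writes "DONE" to a custom terminationMessagePath while running as a non-root user, and the kubelet surfaces that file's content under the terminated container status. A sketch of the shape (pod name, path, and UID are illustrative; that the kubelet makes the message file writable by a non-root user is exactly what this conformance test verifies):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "echo -n DONE > /dev/termination-custom"]
      terminationMessagePath: /dev/termination-custom
      securityContext:
        runAsUser: 1000
  EOF
  # Once the pod has exited, read the message back out of status:
  kubectl get pod termination-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
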
Mar 13 13:57:51.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:57:51.825: INFO: namespace container-runtime-9528 deletion completed in 6.099482182s • [SLOW TEST:8.253 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:57:51.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Mar 13 13:57:51.921: INFO: Waiting up to 5m0s for pod "var-expansion-01bb6ca4-677e-49b3-b73c-b9d5bcf1a3f1" in namespace "var-expansion-721" to be "success or failure" Mar 13 13:57:51.926: INFO: Pod "var-expansion-01bb6ca4-677e-49b3-b73c-b9d5bcf1a3f1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.726207ms Mar 13 13:57:53.931: INFO: Pod "var-expansion-01bb6ca4-677e-49b3-b73c-b9d5bcf1a3f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009836951s Mar 13 13:57:55.934: INFO: Pod "var-expansion-01bb6ca4-677e-49b3-b73c-b9d5bcf1a3f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013723286s STEP: Saw pod success Mar 13 13:57:55.934: INFO: Pod "var-expansion-01bb6ca4-677e-49b3-b73c-b9d5bcf1a3f1" satisfied condition "success or failure" Mar 13 13:57:55.937: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-01bb6ca4-677e-49b3-b73c-b9d5bcf1a3f1 container dapi-container: STEP: delete the pod Mar 13 13:57:55.963: INFO: Waiting for pod var-expansion-01bb6ca4-677e-49b3-b73c-b9d5bcf1a3f1 to disappear Mar 13 13:57:56.017: INFO: Pod var-expansion-01bb6ca4-677e-49b3-b73c-b9d5bcf1a3f1 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:57:56.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-721" for this suite. 
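
Editor's note: the substitution under test is Kubernetes-level $(VAR) expansion in a container's args, resolved from the container's env by the kubelet before the process starts, which is distinct from shell expansion. A minimal sketch (names and values illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c"]
      args: ["echo test-value is $(TEST_VAR)"]   # expanded by kubelet, not the shell
      env:
      - name: TEST_VAR
        value: test-value
  EOF
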
Mar 13 13:58:02.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:58:02.106: INFO: namespace var-expansion-721 deletion completed in 6.085623986s • [SLOW TEST:10.281 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:58:02.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ab919a07-094a-47f9-b139-33345ec7d838 STEP: Creating a pod to test consume secrets Mar 13 13:58:02.211: INFO: Waiting up to 5m0s for pod "pod-secrets-e5a89f56-a564-4c1d-be9c-ef64afa747d8" in namespace "secrets-7616" to be "success or failure" Mar 13 13:58:02.227: INFO: Pod "pod-secrets-e5a89f56-a564-4c1d-be9c-ef64afa747d8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.913334ms Mar 13 13:58:04.230: INFO: Pod "pod-secrets-e5a89f56-a564-4c1d-be9c-ef64afa747d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018647186s STEP: Saw pod success Mar 13 13:58:04.230: INFO: Pod "pod-secrets-e5a89f56-a564-4c1d-be9c-ef64afa747d8" satisfied condition "success or failure" Mar 13 13:58:04.232: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-e5a89f56-a564-4c1d-be9c-ef64afa747d8 container secret-volume-test: STEP: delete the pod Mar 13 13:58:04.282: INFO: Waiting for pod pod-secrets-e5a89f56-a564-4c1d-be9c-ef64afa747d8 to disappear Mar 13 13:58:04.286: INFO: Pod pod-secrets-e5a89f56-a564-4c1d-be9c-ef64afa747d8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:58:04.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7616" for this suite. 
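
Editor's note: defaultMode sets the permission bits applied to every key file projected from the Secret, and the test asserts the mounted file carries that mode. A sketch, assuming a throwaway secret (all names illustrative):

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "stat -Lc '%a %n' /etc/secret-volume/data-1"]   # expect: 400
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
        defaultMode: 0400
  EOF
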
Mar 13 13:58:10.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:58:10.358: INFO: namespace secrets-7616 deletion completed in 6.069995028s • [SLOW TEST:8.252 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:58:10.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 13 13:58:10.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2998' Mar 13 13:58:12.325: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 13 13:58:12.325: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Mar 13 13:58:12.330: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 13 13:58:12.361: INFO: scanned /root for discovery docs: Mar 13 13:58:12.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2998' Mar 13 13:58:28.225: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 13 13:58:28.225: INFO: stdout: "Created e2e-test-nginx-rc-6a50ad0d2bc153957d17235a163730c2\nScaling up e2e-test-nginx-rc-6a50ad0d2bc153957d17235a163730c2 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-6a50ad0d2bc153957d17235a163730c2 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-6a50ad0d2bc153957d17235a163730c2 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Mar 13 13:58:28.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2998' Mar 13 13:58:28.328: INFO: stderr: "" Mar 13 13:58:28.328: INFO: stdout: "e2e-test-nginx-rc-6a50ad0d2bc153957d17235a163730c2-rq288 " Mar 13 13:58:28.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-6a50ad0d2bc153957d17235a163730c2-rq288 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2998' Mar 13 13:58:28.395: INFO: stderr: "" Mar 13 13:58:28.395: INFO: stdout: "true" Mar 13 13:58:28.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-6a50ad0d2bc153957d17235a163730c2-rq288 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2998' Mar 13 13:58:28.464: INFO: stderr: "" Mar 13 13:58:28.464: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Mar 13 13:58:28.464: INFO: e2e-test-nginx-rc-6a50ad0d2bc153957d17235a163730c2-rq288 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Mar 13 13:58:28.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2998' Mar 13 13:58:28.539: INFO: stderr: "" Mar 13 13:58:28.539: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 13:58:28.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2998" for this suite.
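
Editor's note: the harness itself flags both commands used in this test as deprecated (`kubectl run --generator=run/v1` and `kubectl rolling-update`). On current clusters the same "update to the same image and wait for convergence" flow is expressed with a Deployment and the rollout family; a sketch (the deployment and container names are the defaults kubectl derives, i.e. assumptions):

  kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
  kubectl set image deployment/e2e-test-nginx nginx=docker.io/library/nginx:1.14-alpine
  kubectl rollout status deployment/e2e-test-nginx
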
Mar 13 13:58:50.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 13:58:50.647: INFO: namespace kubectl-2998 deletion completed in 22.100730655s • [SLOW TEST:40.288 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 13:58:50.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1878 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-1878 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1878 Mar 13 13:58:50.730: INFO: Found 0 stateful pods, waiting for 1 Mar 13 13:59:00.733: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 13 13:59:00.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1878 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 13 13:59:00.958: INFO: stderr: "I0313 13:59:00.858771 2285 log.go:172] (0xc000980370) (0xc0007a8820) Create stream\nI0313 13:59:00.858808 2285 log.go:172] (0xc000980370) (0xc0007a8820) Stream added, broadcasting: 1\nI0313 13:59:00.860469 2285 log.go:172] (0xc000980370) Reply frame received for 1\nI0313 13:59:00.860513 2285 log.go:172] (0xc000980370) (0xc0005cc1e0) Create stream\nI0313 13:59:00.860528 2285 log.go:172] (0xc000980370) (0xc0005cc1e0) Stream added, broadcasting: 3\nI0313 13:59:00.861337 2285 log.go:172] (0xc000980370) Reply frame received for 3\nI0313 13:59:00.861372 2285 log.go:172] (0xc000980370) (0xc000a76000) Create stream\nI0313 13:59:00.861386 2285 log.go:172] (0xc000980370) (0xc000a76000) Stream added, broadcasting: 5\nI0313 13:59:00.862009 2285 log.go:172] (0xc000980370) Reply frame received for 5\nI0313 13:59:00.931359 2285 log.go:172] (0xc000980370) Data frame received for 5\nI0313 13:59:00.931387 2285 log.go:172] (0xc000a76000) (5) Data frame handling\nI0313 13:59:00.931409 2285 log.go:172] (0xc000a76000) (5) Data frame sent\n+ 
mv -v /usr/share/nginx/html/index.html /tmp/\nI0313 13:59:00.955120 2285 log.go:172] (0xc000980370) Data frame received for 5\nI0313 13:59:00.955146 2285 log.go:172] (0xc000a76000) (5) Data frame handling\nI0313 13:59:00.955163 2285 log.go:172] (0xc000980370) Data frame received for 3\nI0313 13:59:00.955169 2285 log.go:172] (0xc0005cc1e0) (3) Data frame handling\nI0313 13:59:00.955176 2285 log.go:172] (0xc0005cc1e0) (3) Data frame sent\nI0313 13:59:00.955191 2285 log.go:172] (0xc000980370) Data frame received for 3\nI0313 13:59:00.955197 2285 log.go:172] (0xc0005cc1e0) (3) Data frame handling\nI0313 13:59:00.956236 2285 log.go:172] (0xc000980370) Data frame received for 1\nI0313 13:59:00.956255 2285 log.go:172] (0xc0007a8820) (1) Data frame handling\nI0313 13:59:00.956281 2285 log.go:172] (0xc0007a8820) (1) Data frame sent\nI0313 13:59:00.956296 2285 log.go:172] (0xc000980370) (0xc0007a8820) Stream removed, broadcasting: 1\nI0313 13:59:00.956317 2285 log.go:172] (0xc000980370) Go away received\nI0313 13:59:00.956676 2285 log.go:172] (0xc000980370) (0xc0007a8820) Stream removed, broadcasting: 1\nI0313 13:59:00.956700 2285 log.go:172] (0xc000980370) (0xc0005cc1e0) Stream removed, broadcasting: 3\nI0313 13:59:00.956717 2285 log.go:172] (0xc000980370) (0xc000a76000) Stream removed, broadcasting: 5\n" Mar 13 13:59:00.959: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 13 13:59:00.959: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 13 13:59:00.964: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 13 13:59:10.968: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 13 13:59:10.968: INFO: Waiting for statefulset status.replicas updated to 0 Mar 13 13:59:10.995: INFO: POD NODE PHASE GRACE CONDITIONS Mar 13 13:59:10.995: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:58:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:59:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:59:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:58:50 +0000 UTC }] Mar 13 13:59:10.995: INFO: Mar 13 13:59:10.995: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 13 13:59:11.998: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.981782102s Mar 13 13:59:13.013: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.978168562s Mar 13 13:59:14.017: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.963436024s Mar 13 13:59:15.021: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.959313027s Mar 13 13:59:16.026: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.955343852s Mar 13 13:59:17.030: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.950722574s Mar 13 13:59:18.034: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.946279633s Mar 13 13:59:19.039: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.942412025s Mar 13 13:59:20.043: INFO: Verifying statefulset ss doesn't scale past 3 for another 937.843254ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1878 Mar 13 13:59:21.047: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1878 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 13 13:59:21.237: INFO: stderr: "I0313 13:59:21.162635 2304 log.go:172] (0xc000116790) (0xc000514960) Create stream\nI0313 13:59:21.162702 2304 log.go:172] (0xc000116790) (0xc000514960) Stream added, broadcasting: 1\nI0313 13:59:21.164807 2304 log.go:172] (0xc000116790) Reply frame received for 1\nI0313 13:59:21.164843 2304 log.go:172] (0xc000116790) (0xc000780000) Create stream\nI0313 13:59:21.164858 2304 log.go:172] (0xc000116790) (0xc000780000) Stream added, broadcasting: 3\nI0313 13:59:21.165664 2304 log.go:172] (0xc000116790) Reply frame received for 3\nI0313 13:59:21.165698 2304 log.go:172] (0xc000116790) (0xc000514a00) Create stream\nI0313 13:59:21.165712 2304 log.go:172] (0xc000116790) (0xc000514a00) Stream added, broadcasting: 5\nI0313 13:59:21.166654 2304 log.go:172] (0xc000116790) Reply frame received for 5\nI0313 13:59:21.232334 2304 log.go:172] (0xc000116790) Data frame received for 3\nI0313 13:59:21.232366 2304 log.go:172] (0xc000780000) (3) Data frame handling\nI0313 13:59:21.232375 2304 log.go:172] (0xc000780000) (3) Data frame sent\nI0313 13:59:21.232382 2304 log.go:172] (0xc000116790) Data frame received for 3\nI0313 13:59:21.232388 2304 log.go:172] (0xc000780000) (3) Data frame handling\nI0313 13:59:21.232408 2304 log.go:172] (0xc000116790) Data frame received for 5\nI0313 13:59:21.232414 2304 log.go:172] (0xc000514a00) (5) Data frame handling\nI0313 13:59:21.232421 2304 log.go:172] (0xc000514a00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0313 13:59:21.232458 2304 log.go:172] (0xc000116790) Data frame received for 5\nI0313 13:59:21.232473 2304 log.go:172] (0xc000514a00) (5) Data frame handling\nI0313 13:59:21.233915 2304 log.go:172] (0xc000116790) Data frame received for 1\nI0313 13:59:21.233932 2304 log.go:172] (0xc000514960) (1) Data frame handling\nI0313 13:59:21.233943 2304 log.go:172] (0xc000514960) (1) Data frame sent\nI0313 13:59:21.233950 2304 log.go:172] (0xc000116790) (0xc000514960) Stream removed, broadcasting: 1\nI0313 13:59:21.233963 2304 log.go:172] (0xc000116790) Go away received\nI0313 13:59:21.234247 2304 log.go:172] (0xc000116790) (0xc000514960) Stream removed, broadcasting: 1\nI0313 13:59:21.234264 2304 log.go:172] (0xc000116790) (0xc000780000) Stream removed, broadcasting: 3\nI0313 13:59:21.234271 2304 log.go:172] (0xc000116790) (0xc000514a00) Stream removed, broadcasting: 5\n" Mar 13 13:59:21.237: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 13 13:59:21.237: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 13 13:59:21.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1878 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 13 13:59:21.413: INFO: stderr: "I0313 13:59:21.335506 2323 log.go:172] (0xc000104dc0) (0xc00020a780) Create stream\nI0313 13:59:21.335545 2323 log.go:172] (0xc000104dc0) (0xc00020a780) Stream added, broadcasting: 1\nI0313 13:59:21.337353 2323 log.go:172] (0xc000104dc0) Reply frame received for 1\nI0313 13:59:21.337387 2323 log.go:172] (0xc000104dc0) (0xc0007a8000) Create stream\nI0313 13:59:21.337397 2323 log.go:172] (0xc000104dc0) (0xc0007a8000) Stream added, broadcasting: 3\nI0313 13:59:21.337971 2323 log.go:172] (0xc000104dc0) 
Reply frame received for 3\nI0313 13:59:21.338002 2323 log.go:172] (0xc000104dc0) (0xc0007a80a0) Create stream\nI0313 13:59:21.338012 2323 log.go:172] (0xc000104dc0) (0xc0007a80a0) Stream added, broadcasting: 5\nI0313 13:59:21.338763 2323 log.go:172] (0xc000104dc0) Reply frame received for 5\nI0313 13:59:21.408398 2323 log.go:172] (0xc000104dc0) Data frame received for 5\nI0313 13:59:21.408431 2323 log.go:172] (0xc0007a80a0) (5) Data frame handling\nI0313 13:59:21.408443 2323 log.go:172] (0xc0007a80a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0313 13:59:21.408459 2323 log.go:172] (0xc000104dc0) Data frame received for 3\nI0313 13:59:21.408464 2323 log.go:172] (0xc0007a8000) (3) Data frame handling\nI0313 13:59:21.408470 2323 log.go:172] (0xc0007a8000) (3) Data frame sent\nI0313 13:59:21.408475 2323 log.go:172] (0xc000104dc0) Data frame received for 3\nI0313 13:59:21.408479 2323 log.go:172] (0xc0007a8000) (3) Data frame handling\nI0313 13:59:21.408583 2323 log.go:172] (0xc000104dc0) Data frame received for 5\nI0313 13:59:21.408600 2323 log.go:172] (0xc0007a80a0) (5) Data frame handling\nI0313 13:59:21.410413 2323 log.go:172] (0xc000104dc0) Data frame received for 1\nI0313 13:59:21.410429 2323 log.go:172] (0xc00020a780) (1) Data frame handling\nI0313 13:59:21.410439 2323 log.go:172] (0xc00020a780) (1) Data frame sent\nI0313 13:59:21.410459 2323 log.go:172] (0xc000104dc0) (0xc00020a780) Stream removed, broadcasting: 1\nI0313 13:59:21.410471 2323 log.go:172] (0xc000104dc0) Go away received\nI0313 13:59:21.410809 2323 log.go:172] (0xc000104dc0) (0xc00020a780) Stream removed, broadcasting: 1\nI0313 13:59:21.410825 2323 log.go:172] (0xc000104dc0) (0xc0007a8000) Stream removed, broadcasting: 3\nI0313 13:59:21.410831 2323 log.go:172] (0xc000104dc0) (0xc0007a80a0) Stream removed, broadcasting: 5\n" Mar 13 13:59:21.413: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 13 13:59:21.413: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 13 13:59:21.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1878 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 13 13:59:21.568: INFO: stderr: "I0313 13:59:21.502034 2343 log.go:172] (0xc000aa0370) (0xc0009845a0) Create stream\nI0313 13:59:21.502072 2343 log.go:172] (0xc000aa0370) (0xc0009845a0) Stream added, broadcasting: 1\nI0313 13:59:21.503653 2343 log.go:172] (0xc000aa0370) Reply frame received for 1\nI0313 13:59:21.503676 2343 log.go:172] (0xc000aa0370) (0xc0005ee140) Create stream\nI0313 13:59:21.503683 2343 log.go:172] (0xc000aa0370) (0xc0005ee140) Stream added, broadcasting: 3\nI0313 13:59:21.504202 2343 log.go:172] (0xc000aa0370) Reply frame received for 3\nI0313 13:59:21.504219 2343 log.go:172] (0xc000aa0370) (0xc0009846e0) Create stream\nI0313 13:59:21.504225 2343 log.go:172] (0xc000aa0370) (0xc0009846e0) Stream added, broadcasting: 5\nI0313 13:59:21.504700 2343 log.go:172] (0xc000aa0370) Reply frame received for 5\nI0313 13:59:21.564158 2343 log.go:172] (0xc000aa0370) Data frame received for 3\nI0313 13:59:21.564181 2343 log.go:172] (0xc0005ee140) (3) Data frame handling\nI0313 13:59:21.564198 2343 log.go:172] (0xc0005ee140) (3) Data frame sent\nI0313 13:59:21.564250 2343 log.go:172] (0xc000aa0370) Data frame received for 5\nI0313 13:59:21.564263 2343 
log.go:172] (0xc0009846e0) (5) Data frame handling\nI0313 13:59:21.564270 2343 log.go:172] (0xc0009846e0) (5) Data frame sent\nI0313 13:59:21.564276 2343 log.go:172] (0xc000aa0370) Data frame received for 5\nI0313 13:59:21.564281 2343 log.go:172] (0xc0009846e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0313 13:59:21.564306 2343 log.go:172] (0xc000aa0370) Data frame received for 3\nI0313 13:59:21.564317 2343 log.go:172] (0xc0005ee140) (3) Data frame handling\nI0313 13:59:21.565308 2343 log.go:172] (0xc000aa0370) Data frame received for 1\nI0313 13:59:21.565325 2343 log.go:172] (0xc0009845a0) (1) Data frame handling\nI0313 13:59:21.565332 2343 log.go:172] (0xc0009845a0) (1) Data frame sent\nI0313 13:59:21.565340 2343 log.go:172] (0xc000aa0370) (0xc0009845a0) Stream removed, broadcasting: 1\nI0313 13:59:21.565394 2343 log.go:172] (0xc000aa0370) Go away received\nI0313 13:59:21.565549 2343 log.go:172] (0xc000aa0370) (0xc0009845a0) Stream removed, broadcasting: 1\nI0313 13:59:21.565560 2343 log.go:172] (0xc000aa0370) (0xc0005ee140) Stream removed, broadcasting: 3\nI0313 13:59:21.565565 2343 log.go:172] (0xc000aa0370) (0xc0009846e0) Stream removed, broadcasting: 5\n" Mar 13 13:59:21.568: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 13 13:59:21.568: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 13 13:59:21.571: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 13 13:59:31.576: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 13 13:59:31.576: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 13 13:59:31.576: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 13 13:59:31.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1878 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 13 13:59:31.797: INFO: stderr: "I0313 13:59:31.709267 2363 log.go:172] (0xc000104fd0) (0xc00096a960) Create stream\nI0313 13:59:31.709313 2363 log.go:172] (0xc000104fd0) (0xc00096a960) Stream added, broadcasting: 1\nI0313 13:59:31.712950 2363 log.go:172] (0xc000104fd0) Reply frame received for 1\nI0313 13:59:31.713007 2363 log.go:172] (0xc000104fd0) (0xc00096a000) Create stream\nI0313 13:59:31.713017 2363 log.go:172] (0xc000104fd0) (0xc00096a000) Stream added, broadcasting: 3\nI0313 13:59:31.713939 2363 log.go:172] (0xc000104fd0) Reply frame received for 3\nI0313 13:59:31.713963 2363 log.go:172] (0xc000104fd0) (0xc000708320) Create stream\nI0313 13:59:31.713971 2363 log.go:172] (0xc000104fd0) (0xc000708320) Stream added, broadcasting: 5\nI0313 13:59:31.718826 2363 log.go:172] (0xc000104fd0) Reply frame received for 5\nI0313 13:59:31.792674 2363 log.go:172] (0xc000104fd0) Data frame received for 5\nI0313 13:59:31.792701 2363 log.go:172] (0xc000708320) (5) Data frame handling\nI0313 13:59:31.792715 2363 log.go:172] (0xc000708320) (5) Data frame sent\nI0313 13:59:31.792724 2363 log.go:172] (0xc000104fd0) Data frame received for 5\nI0313 13:59:31.792731 2363 log.go:172] (0xc000708320) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0313 13:59:31.792755 2363 log.go:172] 
(0xc000104fd0) Data frame received for 3\nI0313 13:59:31.792763 2363 log.go:172] (0xc00096a000) (3) Data frame handling\nI0313 13:59:31.792772 2363 log.go:172] (0xc00096a000) (3) Data frame sent\nI0313 13:59:31.792780 2363 log.go:172] (0xc000104fd0) Data frame received for 3\nI0313 13:59:31.792787 2363 log.go:172] (0xc00096a000) (3) Data frame handling\nI0313 13:59:31.793784 2363 log.go:172] (0xc000104fd0) Data frame received for 1\nI0313 13:59:31.793809 2363 log.go:172] (0xc00096a960) (1) Data frame handling\nI0313 13:59:31.793827 2363 log.go:172] (0xc00096a960) (1) Data frame sent\nI0313 13:59:31.793980 2363 log.go:172] (0xc000104fd0) (0xc00096a960) Stream removed, broadcasting: 1\nI0313 13:59:31.794002 2363 log.go:172] (0xc000104fd0) Go away received\nI0313 13:59:31.794332 2363 log.go:172] (0xc000104fd0) (0xc00096a960) Stream removed, broadcasting: 1\nI0313 13:59:31.794350 2363 log.go:172] (0xc000104fd0) (0xc00096a000) Stream removed, broadcasting: 3\nI0313 13:59:31.794358 2363 log.go:172] (0xc000104fd0) (0xc000708320) Stream removed, broadcasting: 5\n" Mar 13 13:59:31.797: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 13 13:59:31.797: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 13 13:59:31.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1878 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 13 13:59:31.985: INFO: stderr: "I0313 13:59:31.901793 2383 log.go:172] (0xc0009f6630) (0xc000658a00) Create stream\nI0313 13:59:31.901827 2383 log.go:172] (0xc0009f6630) (0xc000658a00) Stream added, broadcasting: 1\nI0313 13:59:31.903180 2383 log.go:172] (0xc0009f6630) Reply frame received for 1\nI0313 13:59:31.903215 2383 log.go:172] (0xc0009f6630) (0xc000658aa0) Create stream\nI0313 13:59:31.903222 2383 log.go:172] (0xc0009f6630) (0xc000658aa0) Stream added, broadcasting: 3\nI0313 13:59:31.904103 2383 log.go:172] (0xc0009f6630) Reply frame received for 3\nI0313 13:59:31.904131 2383 log.go:172] (0xc0009f6630) (0xc000658b40) Create stream\nI0313 13:59:31.904142 2383 log.go:172] (0xc0009f6630) (0xc000658b40) Stream added, broadcasting: 5\nI0313 13:59:31.904947 2383 log.go:172] (0xc0009f6630) Reply frame received for 5\nI0313 13:59:31.952255 2383 log.go:172] (0xc0009f6630) Data frame received for 5\nI0313 13:59:31.952295 2383 log.go:172] (0xc000658b40) (5) Data frame handling\nI0313 13:59:31.952321 2383 log.go:172] (0xc000658b40) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0313 13:59:31.978057 2383 log.go:172] (0xc0009f6630) Data frame received for 3\nI0313 13:59:31.978088 2383 log.go:172] (0xc000658aa0) (3) Data frame handling\nI0313 13:59:31.978103 2383 log.go:172] (0xc000658aa0) (3) Data frame sent\nI0313 13:59:31.978113 2383 log.go:172] (0xc0009f6630) Data frame received for 3\nI0313 13:59:31.978155 2383 log.go:172] (0xc000658aa0) (3) Data frame handling\nI0313 13:59:31.978775 2383 log.go:172] (0xc0009f6630) Data frame received for 5\nI0313 13:59:31.978799 2383 log.go:172] (0xc000658b40) (5) Data frame handling\nI0313 13:59:31.980812 2383 log.go:172] (0xc0009f6630) Data frame received for 1\nI0313 13:59:31.980835 2383 log.go:172] (0xc000658a00) (1) Data frame handling\nI0313 13:59:31.980844 2383 log.go:172] (0xc000658a00) (1) Data frame sent\nI0313 13:59:31.980877 2383 log.go:172] (0xc0009f6630) (0xc000658a00) Stream removed, broadcasting: 1\nI0313 13:59:31.980896 
2383 log.go:172] (0xc0009f6630) Go away received\nI0313 13:59:31.981296 2383 log.go:172] (0xc0009f6630) (0xc000658a00) Stream removed, broadcasting: 1\nI0313 13:59:31.981320 2383 log.go:172] (0xc0009f6630) (0xc000658aa0) Stream removed, broadcasting: 3\nI0313 13:59:31.981337 2383 log.go:172] (0xc0009f6630) (0xc000658b40) Stream removed, broadcasting: 5\n" Mar 13 13:59:31.985: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 13 13:59:31.985: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 13 13:59:31.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1878 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 13 13:59:32.198: INFO: stderr: "I0313 13:59:32.101622 2402 log.go:172] (0xc0009ba160) (0xc000812be0) Create stream\nI0313 13:59:32.101682 2402 log.go:172] (0xc0009ba160) (0xc000812be0) Stream added, broadcasting: 1\nI0313 13:59:32.103685 2402 log.go:172] (0xc0009ba160) Reply frame received for 1\nI0313 13:59:32.103723 2402 log.go:172] (0xc0009ba160) (0xc0008d8000) Create stream\nI0313 13:59:32.103732 2402 log.go:172] (0xc0009ba160) (0xc0008d8000) Stream added, broadcasting: 3\nI0313 13:59:32.104459 2402 log.go:172] (0xc0009ba160) Reply frame received for 3\nI0313 13:59:32.104497 2402 log.go:172] (0xc0009ba160) (0xc0001d4000) Create stream\nI0313 13:59:32.104510 2402 log.go:172] (0xc0009ba160) (0xc0001d4000) Stream added, broadcasting: 5\nI0313 13:59:32.105120 2402 log.go:172] (0xc0009ba160) Reply frame received for 5\nI0313 13:59:32.155802 2402 log.go:172] (0xc0009ba160) Data frame received for 5\nI0313 13:59:32.155821 2402 log.go:172] (0xc0001d4000) (5) Data frame handling\nI0313 13:59:32.155830 2402 log.go:172] (0xc0001d4000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0313 13:59:32.193525 2402 log.go:172] (0xc0009ba160) Data frame received for 3\nI0313 13:59:32.193543 2402 log.go:172] (0xc0008d8000) (3) Data frame handling\nI0313 13:59:32.193568 2402 log.go:172] (0xc0008d8000) (3) Data frame sent\nI0313 13:59:32.194088 2402 log.go:172] (0xc0009ba160) Data frame received for 5\nI0313 13:59:32.194106 2402 log.go:172] (0xc0001d4000) (5) Data frame handling\nI0313 13:59:32.194147 2402 log.go:172] (0xc0009ba160) Data frame received for 3\nI0313 13:59:32.194160 2402 log.go:172] (0xc0008d8000) (3) Data frame handling\nI0313 13:59:32.194532 2402 log.go:172] (0xc0009ba160) Data frame received for 1\nI0313 13:59:32.194560 2402 log.go:172] (0xc000812be0) (1) Data frame handling\nI0313 13:59:32.194573 2402 log.go:172] (0xc000812be0) (1) Data frame sent\nI0313 13:59:32.194584 2402 log.go:172] (0xc0009ba160) (0xc000812be0) Stream removed, broadcasting: 1\nI0313 13:59:32.194596 2402 log.go:172] (0xc0009ba160) Go away received\nI0313 13:59:32.194828 2402 log.go:172] (0xc0009ba160) (0xc000812be0) Stream removed, broadcasting: 1\nI0313 13:59:32.194837 2402 log.go:172] (0xc0009ba160) (0xc0008d8000) Stream removed, broadcasting: 3\nI0313 13:59:32.194841 2402 log.go:172] (0xc0009ba160) (0xc0001d4000) Stream removed, broadcasting: 5\n" Mar 13 13:59:32.198: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 13 13:59:32.198: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 13 13:59:32.198: INFO: Waiting for statefulset status.replicas updated to 0 Mar 13 13:59:32.200: INFO: 
Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 13 13:59:42.205: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 13 13:59:42.205: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 13 13:59:42.205: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 13 13:59:42.217: INFO: POD NODE PHASE GRACE CONDITIONS Mar 13 13:59:42.217: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:58:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:59:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:59:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:58:50 +0000 UTC }] Mar 13 13:59:42.217: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:59:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:59:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:59:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:59:10 +0000 UTC }] Mar 13 13:59:42.217: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:59:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:59:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:59:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 13:59:11 +0000 UTC }] Mar 13 13:59:42.217: INFO: Mar 13 13:59:42.217: INFO: StatefulSet ss has not reached scale 0, at 3 [the same three-pod status table was re-logged once per second from 13:59:43 through 13:59:51, identical apart from a GRACE value of 30s on every pod, and each iteration again ended with: StatefulSet ss has not reached scale 0, at 3; the nine near-duplicate dumps are elided] STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-1878
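The records that follow are the framework's RunHostCmd retry loop. The suite tries to restore index.html on ss-0 so its readiness probe passes again, but the burst scale-down is already deleting the pod: the first exec fails with 'unable to upgrade connection: container not found ("nginx")' while the container terminates, every later attempt fails with 'Error from server (NotFound): pods "ss-0" not found' once the pod object is gone, and after roughly five minutes the suite gives up, logs the (empty) stdout, and moves on to scaling the StatefulSet to 0, confirming, as the earlier STEP lines state, that scale-down does not halt on unhealthy pods. A minimal, hypothetical Go sketch of this retry pattern is below; the helper names echo the log's RunHostCmd wording but are inventions, and the 10-second interval and 5-minute budget are assumptions read off this run.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runHostCmd runs a shell command inside a pod via kubectl exec,
    // mirroring the commands quoted in the log above.
    func runHostCmd(ns, pod, cmd string) (string, error) {
        out, err := exec.Command("kubectl", "exec", "--namespace="+ns, pod,
            "--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    // runHostCmdWithRetries retries until the command succeeds or the
    // timeout elapses, as the suite does between 13:59:52 and 14:04:55.
    func runHostCmdWithRetries(ns, pod, cmd string, interval, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for {
            out, err := runHostCmd(ns, pod, cmd)
            if err == nil {
                return out, nil
            }
            if time.Now().After(deadline) {
                return out, err // give up with the last output, like the log at 14:04:55
            }
            fmt.Printf("Waiting %v to retry failed RunHostCmd: %v\n", interval, err)
            time.Sleep(interval)
        }
    }

    func main() {
        out, err := runHostCmdWithRetries("statefulset-1878", "ss-0",
            "mv -v /tmp/index.html /usr/share/nginx/html/ || true",
            10*time.Second, 5*time.Minute)
        fmt.Println(out, err)
    }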
Mar 13 13:59:52.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1878 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 13 13:59:52.380: INFO: rc: 1 Mar 13 13:59:52.381: INFO: Waiting 10s to retry failed RunHostCmd: error running kubectl exec on ss-0 (exit status 1): Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Mar 13 14:00:02.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1878 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 13 14:00:02.473: INFO: rc: 1 Mar 13 14:00:02.473: INFO: Waiting 10s to retry failed RunHostCmd: error running kubectl exec on ss-0 (exit status 1): Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 [the same exec was retried every 10 seconds through 14:04:45; every attempt returned rc: 1 with stderr 'Error from server (NotFound): pods "ss-0" not found', and the near-identical records, differing only in timestamps and command-object addresses, are elided] Mar 13 14:04:55.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1878 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 13 14:04:55.557: INFO: rc: 1 Mar 13 14:04:55.557: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Mar 13 14:04:55.557: INFO: Scaling statefulset ss to 0 Mar 13 14:04:55.563: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 13 14:04:55.565: INFO: Deleting all statefulset in ns statefulset-1878 Mar 13 14:04:55.567: INFO: Scaling statefulset ss to 0 Mar 13 14:04:55.572: INFO: Waiting for statefulset status.replicas updated to 0 Mar 13 14:04:55.573:
INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:04:55.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1878" for this suite. Mar 13 14:05:01.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:05:01.685: INFO: namespace statefulset-1878 deletion completed in 6.082887926s • [SLOW TEST:371.038 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:05:01.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Mar 13 14:05:01.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5953' Mar 13 14:05:01.974: INFO: stderr: "" Mar 13 14:05:01.974: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 13 14:05:02.977: INFO: Selector matched 1 pods for map[app:redis] Mar 13 14:05:02.977: INFO: Found 0 / 1 Mar 13 14:05:03.979: INFO: Selector matched 1 pods for map[app:redis] Mar 13 14:05:03.979: INFO: Found 1 / 1 Mar 13 14:05:03.979: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 13 14:05:03.982: INFO: Selector matched 1 pods for map[app:redis] Mar 13 14:05:03.982: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 13 14:05:03.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-nmv7r --namespace=kubectl-5953 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 13 14:05:04.098: INFO: stderr: "" Mar 13 14:05:04.099: INFO: stdout: "pod/redis-master-nmv7r patched\n" STEP: checking annotations Mar 13 14:05:04.102: INFO: Selector matched 1 pods for map[app:redis] Mar 13 14:05:04.102: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:05:04.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5953" for this suite. 
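The patch step above can be reproduced by hand against any pod; a minimal sketch using the same strategic-merge payload quoted in the log (the pod and namespace names are taken from this run and would differ elsewhere):

    # Add the annotation x=y via a strategic-merge patch, then read it back.
    kubectl patch pod redis-master-nmv7r --namespace=kubectl-5953 \
      -p '{"metadata":{"annotations":{"x":"y"}}}'
    kubectl get pod redis-master-nmv7r --namespace=kubectl-5953 \
      -o jsonpath='{.metadata.annotations.x}'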
Mar 13 14:05:26.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:05:26.195: INFO: namespace kubectl-5953 deletion completed in 22.089656427s • [SLOW TEST:24.510 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:05:26.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 13 14:05:26.248: INFO: Waiting up to 5m0s for pod "pod-d9555e4a-4f03-4cdc-9adb-dbec1ce17276" in namespace "emptydir-1742" to be "success or failure" Mar 13 14:05:26.250: INFO: Pod "pod-d9555e4a-4f03-4cdc-9adb-dbec1ce17276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058381ms Mar 13 14:05:28.255: INFO: Pod "pod-d9555e4a-4f03-4cdc-9adb-dbec1ce17276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006247608s STEP: Saw pod success Mar 13 14:05:28.255: INFO: Pod "pod-d9555e4a-4f03-4cdc-9adb-dbec1ce17276" satisfied condition "success or failure" Mar 13 14:05:28.257: INFO: Trying to get logs from node iruya-worker2 pod pod-d9555e4a-4f03-4cdc-9adb-dbec1ce17276 container test-container: STEP: delete the pod Mar 13 14:05:28.293: INFO: Waiting for pod pod-d9555e4a-4f03-4cdc-9adb-dbec1ce17276 to disappear Mar 13 14:05:28.298: INFO: Pod pod-d9555e4a-4f03-4cdc-9adb-dbec1ce17276 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:05:28.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1742" for this suite. 
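The generated pod spec is not printed in the log; a hand-written equivalent of the (non-root,0666,tmpfs) case might look like the following sketch (the image, names, and exact permission check are assumptions, since the suite uses its own mounttest image):

    # Non-root pod writing a 0666 file into a tmpfs-backed emptyDir.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0666-tmpfs
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -ln /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory   # tmpfs
    EOF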
Mar 13 14:05:34.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:05:34.403: INFO: namespace emptydir-1742 deletion completed in 6.102335037s • [SLOW TEST:8.207 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:05:34.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 14:05:34.508: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"845bb132-3e64-4640-a8b2-e8bfb1c6782e", Controller:(*bool)(0xc0005c47b2), BlockOwnerDeletion:(*bool)(0xc0005c47b3)}} Mar 13 14:05:34.537: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f103508b-edaa-444f-9894-389920d27c2c", Controller:(*bool)(0xc0031db07a), BlockOwnerDeletion:(*bool)(0xc0031db07b)}} Mar 13 14:05:34.540: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"6330bd9a-7f73-41e7-a782-681c41bd668a", Controller:(*bool)(0xc0031db20a), BlockOwnerDeletion:(*bool)(0xc0031db20b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:05:39.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6103" for this suite. 
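The three INFO lines above show the circle pod1 -> pod3, pod3 -> pod2, pod2 -> pod1 expressed through metadata.ownerReferences. One such link can be written by hand as follows (a sketch; the uid must be the referenced pod's actual metadata.uid, here the value logged for pod1's owner):

    # Point pod1's ownerReferences at pod3, as in the first INFO line.
    kubectl patch pod pod1 --type=merge -p '{
      "metadata": {
        "ownerReferences": [{
          "apiVersion": "v1",
          "kind": "Pod",
          "name": "pod3",
          "uid": "845bb132-3e64-4640-a8b2-e8bfb1c6782e",
          "controller": true,
          "blockOwnerDeletion": true
        }]
      }
    }'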
Mar 13 14:05:45.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:05:45.656: INFO: namespace gc-6103 deletion completed in 6.083648838s • [SLOW TEST:11.253 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:05:45.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 13 14:05:45.848: INFO: Waiting up to 5m0s for pod "pod-6b3e1c94-a5df-4392-b459-85bc65ebeaad" in namespace "emptydir-1090" to be "success or failure" Mar 13 14:05:45.880: INFO: Pod "pod-6b3e1c94-a5df-4392-b459-85bc65ebeaad": Phase="Pending", Reason="", readiness=false. Elapsed: 32.561256ms Mar 13 14:05:47.884: INFO: Pod "pod-6b3e1c94-a5df-4392-b459-85bc65ebeaad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.036007039s STEP: Saw pod success Mar 13 14:05:47.884: INFO: Pod "pod-6b3e1c94-a5df-4392-b459-85bc65ebeaad" satisfied condition "success or failure" Mar 13 14:05:47.886: INFO: Trying to get logs from node iruya-worker2 pod pod-6b3e1c94-a5df-4392-b459-85bc65ebeaad container test-container: STEP: delete the pod Mar 13 14:05:47.907: INFO: Waiting for pod pod-6b3e1c94-a5df-4392-b459-85bc65ebeaad to disappear Mar 13 14:05:47.912: INFO: Pod pod-6b3e1c94-a5df-4392-b459-85bc65ebeaad no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:05:47.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1090" for this suite. 
Mar 13 14:05:53.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:05:54.014: INFO: namespace emptydir-1090 deletion completed in 6.099951518s • [SLOW TEST:8.358 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:05:54.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Mar 13 14:05:54.057: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:05:54.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3344" for this suite. 
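The proxy flags exercised here can be tried directly; with -p 0 kubectl picks a free port and prints it on startup (a sketch; the actual port is chosen at runtime):

    # Serve the API on an ephemeral local port, without the request filter.
    kubectl proxy -p 0 --disable-filter &
    # kubectl prints e.g. "Starting to serve on 127.0.0.1:<port>";
    # the test then curls /api/ through that address:
    #   curl http://127.0.0.1:<port>/api/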
Mar 13 14:06:00.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:06:00.230: INFO: namespace kubectl-3344 deletion completed in 6.092876168s • [SLOW TEST:6.215 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:06:00.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 13 14:06:03.319: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:06:03.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6249" for this suite. 
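The container spec behind this case is not echoed to the log; a minimal equivalent sets the policy on a container that succeeds without writing a termination message (a sketch; the name and image are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-message-fallback
    spec:
      restartPolicy: Never
      containers:
      - name: term
        image: busybox
        command: ["true"]   # exit 0 without writing /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    # FallbackToLogsOnError only substitutes logs on a failed exit, so on
    # success state.terminated.message stays empty, matching "Expected: &{}".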
Mar 13 14:06:09.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:06:09.442: INFO: namespace container-runtime-6249 deletion completed in 6.081226006s • [SLOW TEST:9.212 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:06:09.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Mar 13 14:06:09.486: INFO: namespace kubectl-9883 Mar 13 14:06:09.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9883' Mar 13 14:06:09.728: INFO: stderr: "" Mar 13 14:06:09.728: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 13 14:06:10.736: INFO: Selector matched 1 pods for map[app:redis] Mar 13 14:06:10.736: INFO: Found 0 / 1 Mar 13 14:06:11.732: INFO: Selector matched 1 pods for map[app:redis] Mar 13 14:06:11.732: INFO: Found 1 / 1 Mar 13 14:06:11.732: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 13 14:06:11.755: INFO: Selector matched 1 pods for map[app:redis] Mar 13 14:06:11.755: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 13 14:06:11.755: INFO: wait on redis-master startup in kubectl-9883 Mar 13 14:06:11.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kw2hc redis-master --namespace=kubectl-9883' Mar 13 14:06:11.866: INFO: stderr: "" Mar 13 14:06:11.866: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 13 Mar 14:06:10.888 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 13 Mar 14:06:10.888 # Server started, Redis version 3.2.12\n1:M 13 Mar 14:06:10.888 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 13 Mar 14:06:10.888 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Mar 13 14:06:11.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9883' Mar 13 14:06:11.989: INFO: stderr: "" Mar 13 14:06:11.989: INFO: stdout: "service/rm2 exposed\n" Mar 13 14:06:11.994: INFO: Service rm2 in namespace kubectl-9883 found. STEP: exposing service Mar 13 14:06:14.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9883' Mar 13 14:06:14.209: INFO: stderr: "" Mar 13 14:06:14.209: INFO: stdout: "service/rm3 exposed\n" Mar 13 14:06:14.221: INFO: Service rm3 in namespace kubectl-9883 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:06:16.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9883" for this suite. 
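Both expose invocations are quoted verbatim in the log; chained together they read:

    # Expose the RC as service rm2, then re-expose rm2 as rm3;
    # both services target container port 6379.
    kubectl expose rc redis-master --name=rm2 --port=1234 \
      --target-port=6379 --namespace=kubectl-9883
    kubectl expose service rm2 --name=rm3 --port=2345 \
      --target-port=6379 --namespace=kubectl-9883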
Mar 13 14:06:38.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:06:38.305: INFO: namespace kubectl-9883 deletion completed in 22.077623015s • [SLOW TEST:28.863 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:06:38.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 13 14:06:38.360: INFO: Waiting up to 5m0s for pod "pod-38d9e5b1-49cd-453b-99a9-c0058b46a029" in namespace "emptydir-7301" to be "success or failure" Mar 13 14:06:38.365: INFO: Pod "pod-38d9e5b1-49cd-453b-99a9-c0058b46a029": Phase="Pending", Reason="", readiness=false. Elapsed: 5.092483ms Mar 13 14:06:40.368: INFO: Pod "pod-38d9e5b1-49cd-453b-99a9-c0058b46a029": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007971546s Mar 13 14:06:42.371: INFO: Pod "pod-38d9e5b1-49cd-453b-99a9-c0058b46a029": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01069456s STEP: Saw pod success Mar 13 14:06:42.371: INFO: Pod "pod-38d9e5b1-49cd-453b-99a9-c0058b46a029" satisfied condition "success or failure" Mar 13 14:06:42.372: INFO: Trying to get logs from node iruya-worker pod pod-38d9e5b1-49cd-453b-99a9-c0058b46a029 container test-container: STEP: delete the pod Mar 13 14:06:42.415: INFO: Waiting for pod pod-38d9e5b1-49cd-453b-99a9-c0058b46a029 to disappear Mar 13 14:06:42.419: INFO: Pod pod-38d9e5b1-49cd-453b-99a9-c0058b46a029 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:06:42.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7301" for this suite. 
Mar 13 14:06:48.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:06:48.494: INFO: namespace emptydir-7301 deletion completed in 6.072662066s • [SLOW TEST:10.188 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:06:48.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 13 14:06:49.090: INFO: Pod name wrapped-volume-race-e91dc8c5-84dd-47f8-948b-c7b06cc1ae88: Found 0 pods out of 5 Mar 13 14:06:54.097: INFO: Pod name wrapped-volume-race-e91dc8c5-84dd-47f8-948b-c7b06cc1ae88: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e91dc8c5-84dd-47f8-948b-c7b06cc1ae88 in namespace emptydir-wrapper-9146, will wait for the garbage collector to delete the pods Mar 13 14:07:04.197: INFO: Deleting ReplicationController wrapped-volume-race-e91dc8c5-84dd-47f8-948b-c7b06cc1ae88 took: 16.845331ms Mar 13 14:07:04.497: INFO: Terminating ReplicationController wrapped-volume-race-e91dc8c5-84dd-47f8-948b-c7b06cc1ae88 pods took: 300.232244ms STEP: Creating RC which spawns configmap-volume pods Mar 13 14:07:44.437: INFO: Pod name wrapped-volume-race-3434d238-8b96-4cbb-98c8-89a4d5726cb9: Found 0 pods out of 5 Mar 13 14:07:49.444: INFO: Pod name wrapped-volume-race-3434d238-8b96-4cbb-98c8-89a4d5726cb9: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3434d238-8b96-4cbb-98c8-89a4d5726cb9 in namespace emptydir-wrapper-9146, will wait for the garbage collector to delete the pods Mar 13 14:08:01.526: INFO: Deleting ReplicationController wrapped-volume-race-3434d238-8b96-4cbb-98c8-89a4d5726cb9 took: 5.511967ms Mar 13 14:08:01.826: INFO: Terminating ReplicationController wrapped-volume-race-3434d238-8b96-4cbb-98c8-89a4d5726cb9 pods took: 300.217927ms STEP: Creating RC which spawns configmap-volume pods Mar 13 14:08:37.475: INFO: Pod name wrapped-volume-race-dc7ec4ba-8f39-48f0-96eb-c6104cc6edcb: Found 0 pods out of 5 Mar 13 14:08:42.481: INFO: Pod name wrapped-volume-race-dc7ec4ba-8f39-48f0-96eb-c6104cc6edcb: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-dc7ec4ba-8f39-48f0-96eb-c6104cc6edcb in namespace emptydir-wrapper-9146, will wait for the garbage collector to delete the pods Mar 13 14:08:52.555: INFO: Deleting ReplicationController 
wrapped-volume-race-dc7ec4ba-8f39-48f0-96eb-c6104cc6edcb took: 4.766277ms Mar 13 14:08:52.855: INFO: Terminating ReplicationController wrapped-volume-race-dc7ec4ba-8f39-48f0-96eb-c6104cc6edcb pods took: 300.253957ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:09:34.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9146" for this suite. Mar 13 14:09:42.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:09:43.032: INFO: namespace emptydir-wrapper-9146 deletion completed in 8.054933887s • [SLOW TEST:174.537 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:09:43.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 14:09:43.112: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d80731d-edb7-41b4-bf77-d8b1ea80ad5f" in namespace "downward-api-3943" to be "success or failure" Mar 13 14:09:43.124: INFO: Pod "downwardapi-volume-4d80731d-edb7-41b4-bf77-d8b1ea80ad5f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.265851ms Mar 13 14:09:45.128: INFO: Pod "downwardapi-volume-4d80731d-edb7-41b4-bf77-d8b1ea80ad5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015595168s STEP: Saw pod success Mar 13 14:09:45.128: INFO: Pod "downwardapi-volume-4d80731d-edb7-41b4-bf77-d8b1ea80ad5f" satisfied condition "success or failure" Mar 13 14:09:45.130: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-4d80731d-edb7-41b4-bf77-d8b1ea80ad5f container client-container: STEP: delete the pod Mar 13 14:09:45.159: INFO: Waiting for pod downwardapi-volume-4d80731d-edb7-41b4-bf77-d8b1ea80ad5f to disappear Mar 13 14:09:45.162: INFO: Pod downwardapi-volume-4d80731d-edb7-41b4-bf77-d8b1ea80ad5f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:09:45.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3943" for this suite. 
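The downward API volume used by this test follows the standard resourceFieldRef wiring; a sketch of an equivalent pod (the file name, image, and 250m request are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpu-request
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
    EOF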
Mar 13 14:09:51.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:09:51.248: INFO: namespace downward-api-3943 deletion completed in 6.083432862s • [SLOW TEST:8.216 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:09:51.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 14:09:51.316: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ba76957-e280-4e6d-beab-e102aadd0a37" in namespace "downward-api-7393" to be "success or failure" Mar 13 14:09:51.336: INFO: Pod "downwardapi-volume-1ba76957-e280-4e6d-beab-e102aadd0a37": Phase="Pending", Reason="", readiness=false. Elapsed: 19.933378ms Mar 13 14:09:53.344: INFO: Pod "downwardapi-volume-1ba76957-e280-4e6d-beab-e102aadd0a37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.027540283s STEP: Saw pod success Mar 13 14:09:53.344: INFO: Pod "downwardapi-volume-1ba76957-e280-4e6d-beab-e102aadd0a37" satisfied condition "success or failure" Mar 13 14:09:53.346: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1ba76957-e280-4e6d-beab-e102aadd0a37 container client-container: STEP: delete the pod Mar 13 14:09:53.362: INFO: Waiting for pod downwardapi-volume-1ba76957-e280-4e6d-beab-e102aadd0a37 to disappear Mar 13 14:09:53.398: INFO: Pod downwardapi-volume-1ba76957-e280-4e6d-beab-e102aadd0a37 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:09:53.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7393" for this suite. 
Mar 13 14:09:59.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:09:59.525: INFO: namespace downward-api-7393 deletion completed in 6.123913146s • [SLOW TEST:8.277 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:09:59.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-22bb3618-f035-4411-b7cf-df04793c3197 STEP: Creating a pod to test consume secrets Mar 13 14:09:59.577: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6b785353-4cb7-4127-ac09-54347343fbad" in namespace "projected-6149" to be "success or failure" Mar 13 14:09:59.608: INFO: Pod "pod-projected-secrets-6b785353-4cb7-4127-ac09-54347343fbad": Phase="Pending", Reason="", readiness=false. Elapsed: 30.55772ms Mar 13 14:10:01.612: INFO: Pod "pod-projected-secrets-6b785353-4cb7-4127-ac09-54347343fbad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.034504415s STEP: Saw pod success Mar 13 14:10:01.612: INFO: Pod "pod-projected-secrets-6b785353-4cb7-4127-ac09-54347343fbad" satisfied condition "success or failure" Mar 13 14:10:01.615: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-6b785353-4cb7-4127-ac09-54347343fbad container projected-secret-volume-test: STEP: delete the pod Mar 13 14:10:01.647: INFO: Waiting for pod pod-projected-secrets-6b785353-4cb7-4127-ac09-54347343fbad to disappear Mar 13 14:10:01.653: INFO: Pod pod-projected-secrets-6b785353-4cb7-4127-ac09-54347343fbad no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:10:01.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6149" for this suite. 
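The projected-secret volume with defaultMode can be reconstructed as follows (a sketch; the secret name is modeled on the one logged above, and 0400 is an arbitrary illustrative mode, since the test's exact mode is not printed):

    kubectl create secret generic projected-secret-test \
      --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -ln /etc/projected"]
        volumeMounts:
        - name: projected-secret
          mountPath: /etc/projected
      volumes:
      - name: projected-secret
        projected:
          defaultMode: 0400
          sources:
          - secret:
              name: projected-secret-test
    EOF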
Mar 13 14:10:07.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:10:07.771: INFO: namespace projected-6149 deletion completed in 6.115313376s • [SLOW TEST:8.246 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:10:07.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Mar 13 14:10:11.853: INFO: Pod pod-hostip-92a83c4a-442e-4534-9808-bb86c184a983 has hostIP: 172.17.0.6 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:10:11.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1258" for this suite. 
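The hostIP assertion reduces to a single field lookup, shown here with the pod name from this run (any running pod works):

    # Prints the node IP the pod landed on, 172.17.0.6 in this run.
    kubectl get pod pod-hostip-92a83c4a-442e-4534-9808-bb86c184a983 \
      --namespace=pods-1258 -o jsonpath='{.status.hostIP}'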
Mar 13 14:10:33.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:10:33.952: INFO: namespace pods-1258 deletion completed in 22.095169928s • [SLOW TEST:26.180 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:10:33.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-90c55415-c2ab-47ca-a04f-bbcf6db65487 in namespace container-probe-5496 Mar 13 14:10:36.034: INFO: Started pod busybox-90c55415-c2ab-47ca-a04f-bbcf6db65487 in namespace container-probe-5496 STEP: checking the pod's current state and verifying that restartCount is present Mar 13 14:10:36.036: INFO: Initial restart count of pod busybox-90c55415-c2ab-47ca-a04f-bbcf6db65487 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:14:36.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5496" for this suite. 
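An equivalent of the busybox pod with the exec liveness probe might look like this sketch (the timings are assumptions; the log only shows that restartCount stayed at 0 for four minutes):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-liveness
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "touch /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
    # While /tmp/health exists the probe passes, so
    # .status.containerStatuses[0].restartCount remains 0.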
Mar 13 14:14:43.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:14:43.073: INFO: namespace container-probe-5496 deletion completed in 6.354922435s • [SLOW TEST:249.121 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:14:43.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Mar 13 14:14:43.749: INFO: Waiting up to 5m0s for pod "pod-6080dcc3-2856-4c13-b78a-eca52bcf990d" in namespace "emptydir-1765" to be "success or failure" Mar 13 14:14:43.822: INFO: Pod "pod-6080dcc3-2856-4c13-b78a-eca52bcf990d": Phase="Pending", Reason="", readiness=false. Elapsed: 72.919746ms Mar 13 14:14:45.825: INFO: Pod "pod-6080dcc3-2856-4c13-b78a-eca52bcf990d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.076147951s STEP: Saw pod success Mar 13 14:14:45.825: INFO: Pod "pod-6080dcc3-2856-4c13-b78a-eca52bcf990d" satisfied condition "success or failure" Mar 13 14:14:45.827: INFO: Trying to get logs from node iruya-worker pod pod-6080dcc3-2856-4c13-b78a-eca52bcf990d container test-container: STEP: delete the pod Mar 13 14:14:45.862: INFO: Waiting for pod pod-6080dcc3-2856-4c13-b78a-eca52bcf990d to disappear Mar 13 14:14:45.867: INFO: Pod pod-6080dcc3-2856-4c13-b78a-eca52bcf990d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:14:45.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1765" for this suite. 
Mar 13 14:14:51.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:14:51.953: INFO: namespace emptydir-1765 deletion completed in 6.083489561s • [SLOW TEST:8.880 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:14:51.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 13 14:14:52.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8350' Mar 13 14:14:53.632: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 13 14:14:53.632: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Mar 13 14:14:53.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8350' Mar 13 14:14:53.780: INFO: stderr: "" Mar 13 14:14:53.780: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:14:53.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8350" for this suite. 
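The stderr line above is the v1.15-era generator deprecation; the logged invocation and the replacements the warning itself names are:

    # As run by the test (defaults to the deployment/apps.v1 generator):
    kubectl run e2e-test-nginx-deployment \
      --image=docker.io/library/nginx:1.14-alpine
    # Replacements suggested by the deprecation warning:
    kubectl run e2e-test-nginx-pod --generator=run-pod/v1 \
      --image=docker.io/library/nginx:1.14-alpine
    kubectl create deployment e2e-test-nginx-deployment \
      --image=docker.io/library/nginx:1.14-alpine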
Mar 13 14:14:59.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:14:59.897: INFO: namespace kubectl-8350 deletion completed in 6.1136631s • [SLOW TEST:7.944 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:14:59.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 13 14:14:59.940: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 13 14:14:59.947: INFO: Waiting for terminating namespaces to be deleted... Mar 13 14:14:59.948: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 13 14:14:59.951: INFO: kindnet-9jdkr from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded) Mar 13 14:14:59.951: INFO: Container kindnet-cni ready: true, restart count 0 Mar 13 14:14:59.951: INFO: kube-proxy-nf96r from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded) Mar 13 14:14:59.951: INFO: Container kube-proxy ready: true, restart count 0 Mar 13 14:14:59.951: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 13 14:14:59.955: INFO: kube-proxy-clpmt from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded) Mar 13 14:14:59.955: INFO: Container kube-proxy ready: true, restart count 0 Mar 13 14:14:59.955: INFO: kindnet-d7zdc from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded) Mar 13 14:14:59.955: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Mar 13 14:15:00.027: INFO: Pod kindnet-9jdkr requesting resource cpu=100m on Node iruya-worker Mar 13 14:15:00.027: INFO: Pod kindnet-d7zdc requesting resource cpu=100m on Node iruya-worker2 Mar 13 14:15:00.027: INFO: Pod kube-proxy-clpmt requesting resource cpu=0m on Node iruya-worker2 Mar 13 14:15:00.027: INFO: Pod kube-proxy-nf96r requesting resource cpu=0m on Node iruya-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-2d1dc97a-a2b2-463d-9775-3c1a8ec4984f.15fbe2a6d020ed1d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-226/filler-pod-2d1dc97a-a2b2-463d-9775-3c1a8ec4984f to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-2d1dc97a-a2b2-463d-9775-3c1a8ec4984f.15fbe2a7005b6109], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-2d1dc97a-a2b2-463d-9775-3c1a8ec4984f.15fbe2a7100ed67e], Reason = [Created], Message = [Created container filler-pod-2d1dc97a-a2b2-463d-9775-3c1a8ec4984f] STEP: Considering event: Type = [Normal], Name = [filler-pod-2d1dc97a-a2b2-463d-9775-3c1a8ec4984f.15fbe2a71d82223f], Reason = [Started], Message = [Started container filler-pod-2d1dc97a-a2b2-463d-9775-3c1a8ec4984f] STEP: Considering event: Type = [Normal], Name = [filler-pod-c1956dd3-186b-46d6-8ab8-6b72405a154c.15fbe2a6d2855acb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-226/filler-pod-c1956dd3-186b-46d6-8ab8-6b72405a154c to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-c1956dd3-186b-46d6-8ab8-6b72405a154c.15fbe2a70068fb2b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c1956dd3-186b-46d6-8ab8-6b72405a154c.15fbe2a70adc8bc7], Reason = [Created], Message = [Created container filler-pod-c1956dd3-186b-46d6-8ab8-6b72405a154c] STEP: Considering event: Type = [Normal], Name = [filler-pod-c1956dd3-186b-46d6-8ab8-6b72405a154c.15fbe2a716b740ff], Reason = [Started], Message = [Started container filler-pod-c1956dd3-186b-46d6-8ab8-6b72405a154c] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fbe2a7c1f58360], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:15:05.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-226" for this suite. 
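The final FailedScheduling event can be provoked with any pod whose CPU request exceeds what the filler pods left free; a sketch (the 10-CPU figure is an arbitrary over-ask, not taken from this run):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: additional-pod
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: "10"   # more than any node has unreserved
    EOF
    # Expected event, as logged above:
    #   0/3 nodes are available: 1 node(s) had taints that the pod
    #   didn't tolerate, 2 Insufficient cpu.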
Mar 13 14:15:11.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:15:11.287: INFO: namespace sched-pred-226 deletion completed in 6.092659873s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:11.390 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:15:11.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 14:15:11.355: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:15:13.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6080" for this suite. 
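Log retrieval over websockets goes through the same pods/log subresource served for ordinary kubectl logs; outside the Go client it can be approximated as follows (the pod name is illustrative, since the test's generated name is not printed):

    # Ordinary path:
    kubectl logs pod-logs-websocket --namespace=pods-6080
    # Raw subresource a websocket client would attach to:
    kubectl get --raw "/api/v1/namespaces/pods-6080/pods/pod-logs-websocket/log"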
Mar 13 14:15:51.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:15:51.471: INFO: namespace pods-6080 deletion completed in 38.080108196s • [SLOW TEST:40.184 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:15:51.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9392 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9392 STEP: Creating statefulset with conflicting port in namespace statefulset-9392 STEP: Waiting until pod test-pod will start running in namespace statefulset-9392 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9392 Mar 13 14:15:53.561: INFO: Observed stateful pod in namespace: statefulset-9392, name: ss-0, uid: 32cf4c40-d16f-455a-a096-0d3e60284683, status phase: Pending. Waiting for statefulset controller to delete. Mar 13 14:15:54.282: INFO: Observed stateful pod in namespace: statefulset-9392, name: ss-0, uid: 32cf4c40-d16f-455a-a096-0d3e60284683, status phase: Failed. Waiting for statefulset controller to delete. Mar 13 14:15:54.305: INFO: Observed stateful pod in namespace: statefulset-9392, name: ss-0, uid: 32cf4c40-d16f-455a-a096-0d3e60284683, status phase: Failed. Waiting for statefulset controller to delete. 
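The eviction dance above comes from a deliberate hostPort collision: a bare pod is pinned to a node and holds a port, then a StatefulSet pod pinned to the same node demands the same hostPort, so the kubelet fails ss-0 and the StatefulSet controller must delete and recreate it until the bare pod goes away. A sketch of the two conflicting objects; names and the port are hypothetical, and both are pinned via NodeName for simplicity.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// conflictingObjects returns a bare pod and a StatefulSet whose pod
// demands the same hostPort on the same node, reproducing the
// Pending -> Failed -> recreate cycle logged above.
func conflictingObjects(node string) (*corev1.Pod, *appsv1.StatefulSet) {
	labels := map[string]string{"app": "ss-conflict"}
	ports := []corev1.ContainerPort{{ContainerPort: 21017, HostPort: 21017}}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			NodeName:   node,
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1", Ports: ports}},
		},
	}
	replicas := int32(1)
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					NodeName:   node,
					Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1", Ports: ports}},
				},
			},
		},
	}
	return pod, ss
}

func main() { _, _ = conflictingObjects("iruya-worker") }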
Mar 13 14:15:54.321: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9392 STEP: Removing pod with conflicting port in namespace statefulset-9392 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9392 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 13 14:16:04.387: INFO: Deleting all statefulset in ns statefulset-9392 Mar 13 14:16:04.389: INFO: Scaling statefulset ss to 0 Mar 13 14:16:14.404: INFO: Waiting for statefulset status.replicas updated to 0 Mar 13 14:16:14.405: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:16:14.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9392" for this suite. Mar 13 14:16:20.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:16:20.508: INFO: namespace statefulset-9392 deletion completed in 6.091406463s • [SLOW TEST:29.036 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:16:20.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 14:16:20.559: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c06b6612-6064-4ef6-b6e4-e11488b7feb1" in namespace "downward-api-6765" to be "success or failure" Mar 13 14:16:20.575: INFO: Pod "downwardapi-volume-c06b6612-6064-4ef6-b6e4-e11488b7feb1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.923021ms Mar 13 14:16:22.579: INFO: Pod "downwardapi-volume-c06b6612-6064-4ef6-b6e4-e11488b7feb1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.01962291s STEP: Saw pod success Mar 13 14:16:22.579: INFO: Pod "downwardapi-volume-c06b6612-6064-4ef6-b6e4-e11488b7feb1" satisfied condition "success or failure" Mar 13 14:16:22.581: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c06b6612-6064-4ef6-b6e4-e11488b7feb1 container client-container: STEP: delete the pod Mar 13 14:16:22.638: INFO: Waiting for pod downwardapi-volume-c06b6612-6064-4ef6-b6e4-e11488b7feb1 to disappear Mar 13 14:16:22.641: INFO: Pod downwardapi-volume-c06b6612-6064-4ef6-b6e4-e11488b7feb1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:16:22.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6765" for this suite. Mar 13 14:16:28.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:16:28.731: INFO: namespace downward-api-6765 deletion completed in 6.085458376s • [SLOW TEST:8.223 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:16:28.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 14:16:28.832: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 13 14:16:33.837: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 13 14:16:33.837: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 13 14:16:35.844: INFO: Creating deployment "test-rollover-deployment" Mar 13 14:16:35.851: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 13 14:16:37.858: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 13 14:16:37.865: INFO: Ensure that both replica sets have 1 created replica Mar 13 14:16:37.871: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 13 14:16:37.877: INFO: Updating deployment test-rollover-deployment Mar 13 14:16:37.877: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 13 14:16:39.884: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 13 14:16:39.889: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 13 14:16:39.893: INFO: all replica sets need to contain the pod-template-hash label 
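The Downward API volume spec that finished just above checks a defaulting rule: when the container declares no memory limit, the projected limits.memory file reports the node's allocatable memory rather than a container value. A sketch of such a pod; the image, command, and paths are hypothetical.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod projects limits.memory into a file while the container
// itself sets no memory limit, so the file falls back to node allocatable.
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = downwardAPIPod() }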
Mar 13 14:16:39.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705799, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 13 14:16:41.900: INFO: all replica sets need to contain the pod-template-hash label Mar 13 14:16:41.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705799, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 13 14:16:43.900: INFO: all replica sets need to contain the pod-template-hash label Mar 13 14:16:43.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705799, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 13 14:16:45.916: INFO: all replica sets need to contain the pod-template-hash label Mar 13 14:16:45.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705799, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 13 14:16:47.900: INFO: all replica sets need to contain the pod-template-hash label Mar 13 14:16:47.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705799, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719705795, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 13 14:16:49.900: INFO: Mar 13 14:16:49.900: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 13 14:16:49.907: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-9671,SelfLink:/apis/apps/v1/namespaces/deployment-9671/deployments/test-rollover-deployment,UID:cb66c2b3-19d2-4363-b29d-a1474ae2a3eb,ResourceVersion:916878,Generation:2,CreationTimestamp:2020-03-13 14:16:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-13 14:16:35 +0000 UTC 2020-03-13 14:16:35 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-13 14:16:49 +0000 UTC 2020-03-13 14:16:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 13 14:16:49.911: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-9671,SelfLink:/apis/apps/v1/namespaces/deployment-9671/replicasets/test-rollover-deployment-854595fc44,UID:0a09a7be-91fe-4f12-9567-18916c50a4e3,ResourceVersion:916867,Generation:2,CreationTimestamp:2020-03-13 14:16:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment cb66c2b3-19d2-4363-b29d-a1474ae2a3eb 0xc002ee8aa7 0xc002ee8aa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 13 14:16:49.911: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 13 14:16:49.911: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-9671,SelfLink:/apis/apps/v1/namespaces/deployment-9671/replicasets/test-rollover-controller,UID:d4e4912e-2975-47c5-b03e-7717a6fb8ab3,ResourceVersion:916876,Generation:2,CreationTimestamp:2020-03-13 14:16:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment cb66c2b3-19d2-4363-b29d-a1474ae2a3eb 0xc002ee89d7 0xc002ee89d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 13 14:16:49.911: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-9671,SelfLink:/apis/apps/v1/namespaces/deployment-9671/replicasets/test-rollover-deployment-9b8b997cf,UID:2db6dcaa-543a-4a37-b591-643117c8c35f,ResourceVersion:916831,Generation:2,CreationTimestamp:2020-03-13 14:16:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment cb66c2b3-19d2-4363-b29d-a1474ae2a3eb 0xc002ee8b70 0xc002ee8b71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 13 14:16:49.914: INFO: Pod "test-rollover-deployment-854595fc44-dch7r" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-dch7r,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-9671,SelfLink:/api/v1/namespaces/deployment-9671/pods/test-rollover-deployment-854595fc44-dch7r,UID:ebc72fce-27d1-4cdc-af87-d6c01053b17c,ResourceVersion:916845,Generation:0,CreationTimestamp:2020-03-13 14:16:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 
0a09a7be-91fe-4f12-9567-18916c50a4e3 0xc002ebe647 0xc002ebe648}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-b42ml {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-b42ml,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-b42ml true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebe6c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebe6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:16:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:16:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:16:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:16:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.1.57,StartTime:2020-03-13 14:16:37 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-13 14:16:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://3232944ea4a2edce67ace80bf8cb6d92b842bbd4c04597a81ff7852dafb30e59}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:16:49.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9671" for this suite. 
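The Deployment dump above explains the polling loop earlier in this spec: with maxUnavailable=0 and maxSurge=1 the rollout keeps one ready replica at all times, and minReadySeconds=10 is why the test keeps logging ReplicaSetUpdated until the new pod has been ready for a full ten seconds and the old ReplicaSets scale to zero. A sketch of that strategy, assuming the values shown in the dump.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rolloverStrategy returns the rolling-update settings visible in the
// Deployment dump above: never drop below the desired count, surge by
// at most one pod, and require 10s of readiness before counting a pod
// as available.
func rolloverStrategy() (appsv1.DeploymentStrategy, int32) {
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)
	return appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			MaxUnavailable: &maxUnavailable,
			MaxSurge:       &maxSurge,
		},
	}, 10 // minReadySeconds
}

func main() { _, _ = rolloverStrategy() }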
Mar 13 14:16:55.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:16:55.979: INFO: namespace deployment-9671 deletion completed in 6.06263322s • [SLOW TEST:27.248 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:16:55.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 13 14:16:56.033: INFO: Waiting up to 5m0s for pod "pod-17784bc3-1fe7-4874-8e48-c7ca518cc9b9" in namespace "emptydir-7511" to be "success or failure" Mar 13 14:16:56.037: INFO: Pod "pod-17784bc3-1fe7-4874-8e48-c7ca518cc9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218873ms Mar 13 14:16:58.041: INFO: Pod "pod-17784bc3-1fe7-4874-8e48-c7ca518cc9b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007835491s STEP: Saw pod success Mar 13 14:16:58.041: INFO: Pod "pod-17784bc3-1fe7-4874-8e48-c7ca518cc9b9" satisfied condition "success or failure" Mar 13 14:16:58.044: INFO: Trying to get logs from node iruya-worker2 pod pod-17784bc3-1fe7-4874-8e48-c7ca518cc9b9 container test-container: STEP: delete the pod Mar 13 14:16:58.063: INFO: Waiting for pod pod-17784bc3-1fe7-4874-8e48-c7ca518cc9b9 to disappear Mar 13 14:16:58.067: INFO: Pod pod-17784bc3-1fe7-4874-8e48-c7ca518cc9b9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:16:58.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7511" for this suite. 
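The emptyDir case above combines three things: medium Memory (tmpfs backing), a file created with mode 0777, and a non-root UID. A sketch of a pod exercising the same combination; the UID, image, and paths are hypothetical.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirTmpfsPod mounts a memory-backed emptyDir, writes a 0777 file
// into it as a non-root user, and exits, matching the "success or
// failure" pattern in the log above.
func emptyDirTmpfsPod() *corev1.Pod {
	nonRoot := int64(1001)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox",
				Command:         []string{"sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRoot},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}

func main() { _ = emptyDirTmpfsPod() }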
Mar 13 14:17:04.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:17:04.169: INFO: namespace emptydir-7511 deletion completed in 6.09917912s • [SLOW TEST:8.189 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:17:04.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-8003/secret-test-580cee49-dfca-4e96-ada6-8ece7e2a117a STEP: Creating a pod to test consume secrets Mar 13 14:17:04.228: INFO: Waiting up to 5m0s for pod "pod-configmaps-e6c78276-9864-47ce-8faa-8f0155222c0b" in namespace "secrets-8003" to be "success or failure" Mar 13 14:17:04.234: INFO: Pod "pod-configmaps-e6c78276-9864-47ce-8faa-8f0155222c0b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.724452ms Mar 13 14:17:06.237: INFO: Pod "pod-configmaps-e6c78276-9864-47ce-8faa-8f0155222c0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008752097s STEP: Saw pod success Mar 13 14:17:06.237: INFO: Pod "pod-configmaps-e6c78276-9864-47ce-8faa-8f0155222c0b" satisfied condition "success or failure" Mar 13 14:17:06.239: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-e6c78276-9864-47ce-8faa-8f0155222c0b container env-test: STEP: delete the pod Mar 13 14:17:06.254: INFO: Waiting for pod pod-configmaps-e6c78276-9864-47ce-8faa-8f0155222c0b to disappear Mar 13 14:17:06.258: INFO: Pod pod-configmaps-e6c78276-9864-47ce-8faa-8f0155222c0b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:17:06.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8003" for this suite. 
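The secrets spec above consumes a secret through the container environment rather than a volume. A sketch of the env wiring; the secret and key names are hypothetical.

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// secretEnv exposes key "data-1" of secret "secret-test" as the
// environment variable SECRET_DATA, the same mechanism the env-test
// container above verifies.
func secretEnv() corev1.EnvVar {
	return corev1.EnvVar{
		Name: "SECRET_DATA",
		ValueFrom: &corev1.EnvVarSource{
			SecretKeyRef: &corev1.SecretKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
				Key:                  "data-1",
			},
		},
	}
}

func main() { _ = secretEnv() }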
Mar 13 14:17:12.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:17:12.368: INFO: namespace secrets-8003 deletion completed in 6.107253219s • [SLOW TEST:8.199 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:17:12.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 13 14:17:12.465: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-9980,SelfLink:/api/v1/namespaces/watch-9980/configmaps/e2e-watch-test-resource-version,UID:9e30f95c-e149-49ad-ae12-b30a9adacc76,ResourceVersion:917010,Generation:0,CreationTimestamp:2020-03-13 14:17:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 13 14:17:12.465: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-9980,SelfLink:/api/v1/namespaces/watch-9980/configmaps/e2e-watch-test-resource-version,UID:9e30f95c-e149-49ad-ae12-b30a9adacc76,ResourceVersion:917011,Generation:0,CreationTimestamp:2020-03-13 14:17:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:17:12.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9980" for this suite. 
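The watch spec above relies on resourceVersion semantics: starting a watch from the version returned by the first update makes the server replay only what happened after that point, so the client observes exactly the second MODIFIED event and the DELETED event, as logged. A hedged client-go sketch, assuming a recent release where Watch takes a context.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// watchFrom opens a ConfigMap watch starting at a known resourceVersion
// and prints the event types the server replays from that point on.
func watchFrom(config *rest.Config, ns, rv string) error {
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return err
	}
	w, err := clientset.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{ResourceVersion: rv})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("Got:", ev.Type)
	}
	return nil
}

func main() {}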
Mar 13 14:17:18.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:17:18.555: INFO: namespace watch-9980 deletion completed in 6.07839416s • [SLOW TEST:6.187 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:17:18.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-f1f08041-4e2b-41f5-83da-7a81222c7232 in namespace container-probe-581 Mar 13 14:17:20.622: INFO: Started pod liveness-f1f08041-4e2b-41f5-83da-7a81222c7232 in namespace container-probe-581 STEP: checking the pod's current state and verifying that restartCount is present Mar 13 14:17:20.624: INFO: Initial restart count of pod liveness-f1f08041-4e2b-41f5-83da-7a81222c7232 is 0 Mar 13 14:17:44.666: INFO: Restart count of pod container-probe-581/liveness-f1f08041-4e2b-41f5-83da-7a81222c7232 is now 1 (24.041613068s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:17:44.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-581" for this suite. 
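The probe spec above asserts that restartCount goes from 0 to 1 once the container's /healthz endpoint starts failing and the kubelet restarts it (about 24s elapsed in this run). A sketch of such a probe; the port and timings are hypothetical, and recent API versions name the embedded handler ProbeHandler (older ones call it Handler).

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// healthzProbe restarts the container after a single failed HTTP GET on
// /healthz, which is what drives the restartCount change logged above.
func healthzProbe() corev1.Probe {
	return corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
		},
		InitialDelaySeconds: 15,
		PeriodSeconds:       5,
		FailureThreshold:    1,
	}
}

func main() { _ = healthzProbe() }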
Mar 13 14:17:50.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:17:50.781: INFO: namespace container-probe-581 deletion completed in 6.087973536s • [SLOW TEST:32.226 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:17:50.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 13 14:17:53.354: INFO: Successfully updated pod "pod-update-2dd2f553-5280-4022-b2da-83295695b494" STEP: verifying the updated pod is in kubernetes Mar 13 14:17:53.363: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:17:53.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5675" for this suite. 
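The pod-update spec above mutates a live pod. Updates race with other writers on resourceVersion, so the idiomatic client-go pattern is to re-read and mutate inside RetryOnConflict; a sketch, assuming a recent client-go where Get and Update take a context.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updatePodLabel re-reads the pod on each attempt so a stale
// resourceVersion is retried instead of failing the update outright.
func updatePodLabel(clientset kubernetes.Interface, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated"
		_, err = clientset.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
}

func main() {}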
Mar 13 14:18:15.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:18:15.437: INFO: namespace pods-5675 deletion completed in 22.071338867s • [SLOW TEST:24.656 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:18:15.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-11cba995-b578-44d4-bff7-bb9b44809cc5 STEP: Creating a pod to test consume secrets Mar 13 14:18:15.508: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-907880d9-24da-45ec-ac22-dbd66cff4d90" in namespace "projected-3157" to be "success or failure" Mar 13 14:18:15.512: INFO: Pod "pod-projected-secrets-907880d9-24da-45ec-ac22-dbd66cff4d90": Phase="Pending", Reason="", readiness=false. Elapsed: 3.948301ms Mar 13 14:18:17.516: INFO: Pod "pod-projected-secrets-907880d9-24da-45ec-ac22-dbd66cff4d90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007818245s Mar 13 14:18:19.519: INFO: Pod "pod-projected-secrets-907880d9-24da-45ec-ac22-dbd66cff4d90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010826779s STEP: Saw pod success Mar 13 14:18:19.519: INFO: Pod "pod-projected-secrets-907880d9-24da-45ec-ac22-dbd66cff4d90" satisfied condition "success or failure" Mar 13 14:18:19.521: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-907880d9-24da-45ec-ac22-dbd66cff4d90 container projected-secret-volume-test: STEP: delete the pod Mar 13 14:18:19.538: INFO: Waiting for pod pod-projected-secrets-907880d9-24da-45ec-ac22-dbd66cff4d90 to disappear Mar 13 14:18:19.542: INFO: Pod pod-projected-secrets-907880d9-24da-45ec-ac22-dbd66cff4d90 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:18:19.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3157" for this suite. 
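The projected-secret spec above mounts a secret through the projected volume type rather than a plain secret volume. A sketch; the names and mode are hypothetical.

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// projectedSecretVolume exposes one secret through a "projected" volume,
// the volume type the test above mounts and reads back.
func projectedSecretVolume() corev1.Volume {
	mode := int32(0644)
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
					},
				}},
			},
		},
	}
}

func main() { _ = projectedSecretVolume() }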
Mar 13 14:18:25.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:18:25.660: INFO: namespace projected-3157 deletion completed in 6.115744989s • [SLOW TEST:10.223 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:18:25.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-2c865f6c-4cab-44fa-8a3a-9f087effdd3f STEP: Creating secret with name secret-projected-all-test-volume-9e382cb9-931e-46d3-98c9-c604d6b25b5f STEP: Creating a pod to test Check all projections for projected volume plugin Mar 13 14:18:25.736: INFO: Waiting up to 5m0s for pod "projected-volume-8bc25a20-4d9a-43dc-b9ba-132dfe4f3b6d" in namespace "projected-7086" to be "success or failure" Mar 13 14:18:25.752: INFO: Pod "projected-volume-8bc25a20-4d9a-43dc-b9ba-132dfe4f3b6d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.172354ms Mar 13 14:18:27.756: INFO: Pod "projected-volume-8bc25a20-4d9a-43dc-b9ba-132dfe4f3b6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019717637s STEP: Saw pod success Mar 13 14:18:27.756: INFO: Pod "projected-volume-8bc25a20-4d9a-43dc-b9ba-132dfe4f3b6d" satisfied condition "success or failure" Mar 13 14:18:27.758: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-8bc25a20-4d9a-43dc-b9ba-132dfe4f3b6d container projected-all-volume-test: STEP: delete the pod Mar 13 14:18:27.802: INFO: Waiting for pod projected-volume-8bc25a20-4d9a-43dc-b9ba-132dfe4f3b6d to disappear Mar 13 14:18:27.806: INFO: Pod projected-volume-8bc25a20-4d9a-43dc-b9ba-132dfe4f3b6d no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:18:27.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7086" for this suite. 
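The combined-projection spec above is the general form: one projected volume merging a ConfigMap, a Secret, and downward API fields under a single mount point. A sketch with hypothetical object names and file paths.

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// allProjectionsVolume merges all three projection sources into one
// volume, which is exactly what the "all components" check above reads.
func allProjectionsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
}

func main() { _ = allProjectionsVolume() }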
Mar 13 14:18:33.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:18:33.906: INFO: namespace projected-7086 deletion completed in 6.096149622s • [SLOW TEST:8.246 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:18:33.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-84e7ea47-aef6-4cd9-9375-6f9843e4b6ef [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:18:33.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3960" for this suite. Mar 13 14:18:39.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:18:40.083: INFO: namespace secrets-3960 deletion completed in 6.09624125s • [SLOW TEST:6.177 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:18:40.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-fc15ab8a-a447-4b9d-a0f1-bc7005d231e9 Mar 13 14:18:40.182: INFO: Pod name my-hostname-basic-fc15ab8a-a447-4b9d-a0f1-bc7005d231e9: Found 0 pods out of 1 Mar 13 14:18:45.187: INFO: Pod name my-hostname-basic-fc15ab8a-a447-4b9d-a0f1-bc7005d231e9: Found 1 pods out of 1 Mar 13 14:18:45.187: INFO: Ensuring all pods for ReplicationController 
"my-hostname-basic-fc15ab8a-a447-4b9d-a0f1-bc7005d231e9" are running Mar 13 14:18:45.190: INFO: Pod "my-hostname-basic-fc15ab8a-a447-4b9d-a0f1-bc7005d231e9-h5srm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 14:18:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 14:18:41 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 14:18:41 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 14:18:40 +0000 UTC Reason: Message:}]) Mar 13 14:18:45.190: INFO: Trying to dial the pod Mar 13 14:18:50.199: INFO: Controller my-hostname-basic-fc15ab8a-a447-4b9d-a0f1-bc7005d231e9: Got expected result from replica 1 [my-hostname-basic-fc15ab8a-a447-4b9d-a0f1-bc7005d231e9-h5srm]: "my-hostname-basic-fc15ab8a-a447-4b9d-a0f1-bc7005d231e9-h5srm", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:18:50.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6104" for this suite. Mar 13 14:18:56.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:18:56.295: INFO: namespace replication-controller-6104 deletion completed in 6.092660964s • [SLOW TEST:16.212 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:18:56.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete 
[NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:19:19.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5676" for this suite. Mar 13 14:19:25.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:19:25.800: INFO: namespace container-runtime-5676 deletion completed in 6.085144821s • [SLOW TEST:29.505 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:19:25.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:19:29.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3512" for this suite. 
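The kubelet spec above runs a command that always fails and then checks the container's terminated state, whose Reason the kubelet sets to Error on a non-zero exit. A sketch of reading that field from a pod status; the status literal is illustrative, not fetched from a cluster.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// terminatedReason returns the Reason of the first terminated container
// in a pod status, the field the test above asserts against.
func terminatedReason(status corev1.PodStatus) string {
	for _, cs := range status.ContainerStatuses {
		if t := cs.State.Terminated; t != nil {
			return t.Reason
		}
	}
	return ""
}

func main() {
	status := corev1.PodStatus{
		ContainerStatuses: []corev1.ContainerStatus{{
			State: corev1.ContainerState{
				Terminated: &corev1.ContainerStateTerminated{ExitCode: 1, Reason: "Error"},
			},
		}},
	}
	fmt.Println(terminatedReason(status)) // Error
}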
Mar 13 14:19:35.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:19:36.018: INFO: namespace kubelet-test-3512 deletion completed in 6.13038737s • [SLOW TEST:10.218 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:19:36.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-43 I0313 14:19:36.059355 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-43, replica count: 1 I0313 14:19:37.109801 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0313 14:19:38.110011 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 13 14:19:38.238: INFO: Created: latency-svc-qlw65 Mar 13 14:19:38.242: INFO: Got endpoints: latency-svc-qlw65 [32.203558ms] Mar 13 14:19:38.292: INFO: Created: latency-svc-cl7w5 Mar 13 14:19:38.314: INFO: Got endpoints: latency-svc-cl7w5 [71.817859ms] Mar 13 14:19:38.314: INFO: Created: latency-svc-kvzbd Mar 13 14:19:38.322: INFO: Got endpoints: latency-svc-kvzbd [80.274866ms] Mar 13 14:19:38.345: INFO: Created: latency-svc-tjw25 Mar 13 14:19:38.347: INFO: Got endpoints: latency-svc-tjw25 [105.061524ms] Mar 13 14:19:38.371: INFO: Created: latency-svc-scl95 Mar 13 14:19:38.375: INFO: Got endpoints: latency-svc-scl95 [132.886301ms] Mar 13 14:19:38.418: INFO: Created: latency-svc-cj6d2 Mar 13 14:19:38.420: INFO: Got endpoints: latency-svc-cj6d2 [178.50934ms] Mar 13 14:19:38.450: INFO: Created: latency-svc-sdd8n Mar 13 14:19:38.452: INFO: Got endpoints: latency-svc-sdd8n [210.00974ms] Mar 13 14:19:38.476: INFO: Created: latency-svc-vztmz Mar 13 14:19:38.480: INFO: Got endpoints: latency-svc-vztmz [237.908004ms] Mar 13 14:19:38.500: INFO: Created: latency-svc-sfn57 Mar 13 14:19:38.504: INFO: Got endpoints: latency-svc-sfn57 [261.863298ms] Mar 13 14:19:38.544: INFO: Created: latency-svc-55gl7 Mar 13 14:19:38.546: INFO: Got endpoints: latency-svc-55gl7 [303.926897ms] Mar 13 14:19:38.573: INFO: Created: latency-svc-72lqh Mar 13 14:19:38.582: INFO: Got endpoints: latency-svc-72lqh [339.246647ms] Mar 13 14:19:38.603: INFO: Created: latency-svc-whkjh Mar 13 14:19:38.611: INFO: Got endpoints: 
latency-svc-whkjh [368.727874ms] Mar 13 14:19:38.628: INFO: Created: latency-svc-6tqw6 Mar 13 14:19:38.635: INFO: Got endpoints: latency-svc-6tqw6 [393.045336ms] Mar 13 14:19:38.687: INFO: Created: latency-svc-5rkqk Mar 13 14:19:38.710: INFO: Created: latency-svc-m9vfh Mar 13 14:19:38.710: INFO: Got endpoints: latency-svc-5rkqk [467.551091ms] Mar 13 14:19:38.714: INFO: Got endpoints: latency-svc-m9vfh [472.18648ms] Mar 13 14:19:38.734: INFO: Created: latency-svc-4nmz2 Mar 13 14:19:38.746: INFO: Got endpoints: latency-svc-4nmz2 [504.068906ms] Mar 13 14:19:38.765: INFO: Created: latency-svc-bvbfw Mar 13 14:19:38.769: INFO: Got endpoints: latency-svc-bvbfw [454.877928ms] Mar 13 14:19:38.832: INFO: Created: latency-svc-s2lgb Mar 13 14:19:38.856: INFO: Created: latency-svc-ns656 Mar 13 14:19:38.856: INFO: Got endpoints: latency-svc-s2lgb [533.489701ms] Mar 13 14:19:38.860: INFO: Got endpoints: latency-svc-ns656 [512.80316ms] Mar 13 14:19:38.885: INFO: Created: latency-svc-hbx88 Mar 13 14:19:38.890: INFO: Got endpoints: latency-svc-hbx88 [515.060345ms] Mar 13 14:19:38.981: INFO: Created: latency-svc-fv5zs Mar 13 14:19:38.999: INFO: Got endpoints: latency-svc-fv5zs [578.746519ms] Mar 13 14:19:39.023: INFO: Created: latency-svc-r2szg Mar 13 14:19:39.029: INFO: Got endpoints: latency-svc-r2szg [576.600872ms] Mar 13 14:19:39.070: INFO: Created: latency-svc-h8tws Mar 13 14:19:39.077: INFO: Got endpoints: latency-svc-h8tws [596.471006ms] Mar 13 14:19:39.150: INFO: Created: latency-svc-2xbxh Mar 13 14:19:39.198: INFO: Got endpoints: latency-svc-2xbxh [693.763918ms] Mar 13 14:19:39.199: INFO: Created: latency-svc-jz9ml Mar 13 14:19:39.203: INFO: Got endpoints: latency-svc-jz9ml [656.977363ms] Mar 13 14:19:39.227: INFO: Created: latency-svc-bxtzw Mar 13 14:19:39.233: INFO: Got endpoints: latency-svc-bxtzw [651.814998ms] Mar 13 14:19:39.292: INFO: Created: latency-svc-rjs78 Mar 13 14:19:39.317: INFO: Created: latency-svc-crrv9 Mar 13 14:19:39.317: INFO: Got endpoints: latency-svc-rjs78 [706.241551ms] Mar 13 14:19:39.324: INFO: Got endpoints: latency-svc-crrv9 [688.907714ms] Mar 13 14:19:39.353: INFO: Created: latency-svc-v4jtd Mar 13 14:19:39.360: INFO: Got endpoints: latency-svc-v4jtd [650.631785ms] Mar 13 14:19:39.378: INFO: Created: latency-svc-pcw25 Mar 13 14:19:39.385: INFO: Got endpoints: latency-svc-pcw25 [670.621449ms] Mar 13 14:19:39.430: INFO: Created: latency-svc-n5z7s Mar 13 14:19:39.433: INFO: Got endpoints: latency-svc-n5z7s [686.958965ms] Mar 13 14:19:39.460: INFO: Created: latency-svc-tm5s8 Mar 13 14:19:39.473: INFO: Got endpoints: latency-svc-tm5s8 [703.665953ms] Mar 13 14:19:39.493: INFO: Created: latency-svc-mtdgz Mar 13 14:19:39.503: INFO: Got endpoints: latency-svc-mtdgz [647.209072ms] Mar 13 14:19:39.528: INFO: Created: latency-svc-tpkbv Mar 13 14:19:39.579: INFO: Got endpoints: latency-svc-tpkbv [719.254353ms] Mar 13 14:19:39.582: INFO: Created: latency-svc-ccxpc Mar 13 14:19:39.592: INFO: Got endpoints: latency-svc-ccxpc [701.559988ms] Mar 13 14:19:39.629: INFO: Created: latency-svc-4zh4k Mar 13 14:19:39.645: INFO: Got endpoints: latency-svc-4zh4k [645.171066ms] Mar 13 14:19:39.672: INFO: Created: latency-svc-6mv95 Mar 13 14:19:39.675: INFO: Got endpoints: latency-svc-6mv95 [646.253666ms] Mar 13 14:19:39.735: INFO: Created: latency-svc-gnhlf Mar 13 14:19:39.738: INFO: Got endpoints: latency-svc-gnhlf [661.159325ms] Mar 13 14:19:39.778: INFO: Created: latency-svc-krfj7 Mar 13 14:19:39.790: INFO: Got endpoints: latency-svc-krfj7 [592.271916ms] Mar 13 14:19:39.809: INFO: Created: 
latency-svc-vm5cz Mar 13 14:19:39.814: INFO: Got endpoints: latency-svc-vm5cz [610.594206ms] Mar 13 14:19:39.835: INFO: Created: latency-svc-77ptd Mar 13 14:19:39.886: INFO: Got endpoints: latency-svc-77ptd [652.103977ms] Mar 13 14:19:39.894: INFO: Created: latency-svc-r9rsf Mar 13 14:19:39.899: INFO: Got endpoints: latency-svc-r9rsf [581.22291ms] Mar 13 14:19:39.916: INFO: Created: latency-svc-l6qh7 Mar 13 14:19:39.923: INFO: Got endpoints: latency-svc-l6qh7 [598.22716ms] Mar 13 14:19:39.948: INFO: Created: latency-svc-fqrcg Mar 13 14:19:39.952: INFO: Got endpoints: latency-svc-fqrcg [591.844448ms] Mar 13 14:19:40.025: INFO: Created: latency-svc-f94w2 Mar 13 14:19:40.025: INFO: Got endpoints: latency-svc-f94w2 [639.723653ms] Mar 13 14:19:40.063: INFO: Created: latency-svc-4lr2z Mar 13 14:19:40.074: INFO: Got endpoints: latency-svc-4lr2z [640.574839ms] Mar 13 14:19:40.114: INFO: Created: latency-svc-rtrgl Mar 13 14:19:40.122: INFO: Got endpoints: latency-svc-rtrgl [648.958345ms] Mar 13 14:19:40.197: INFO: Created: latency-svc-cb8rl Mar 13 14:19:40.208: INFO: Got endpoints: latency-svc-cb8rl [704.747386ms] Mar 13 14:19:40.231: INFO: Created: latency-svc-rzjgs Mar 13 14:19:40.238: INFO: Got endpoints: latency-svc-rzjgs [658.158687ms] Mar 13 14:19:40.253: INFO: Created: latency-svc-d5nd4 Mar 13 14:19:40.261: INFO: Got endpoints: latency-svc-d5nd4 [668.799679ms] Mar 13 14:19:40.290: INFO: Created: latency-svc-z5z6x Mar 13 14:19:40.292: INFO: Got endpoints: latency-svc-z5z6x [647.630343ms] Mar 13 14:19:40.347: INFO: Created: latency-svc-dl5df Mar 13 14:19:40.352: INFO: Got endpoints: latency-svc-dl5df [676.778749ms] Mar 13 14:19:40.374: INFO: Created: latency-svc-pl2mm Mar 13 14:19:40.381: INFO: Got endpoints: latency-svc-pl2mm [643.382166ms] Mar 13 14:19:40.398: INFO: Created: latency-svc-scfmm Mar 13 14:19:40.406: INFO: Got endpoints: latency-svc-scfmm [615.903833ms] Mar 13 14:19:40.427: INFO: Created: latency-svc-8hmv2 Mar 13 14:19:40.430: INFO: Got endpoints: latency-svc-8hmv2 [615.919409ms] Mar 13 14:19:40.484: INFO: Created: latency-svc-fn798 Mar 13 14:19:40.486: INFO: Got endpoints: latency-svc-fn798 [600.322673ms] Mar 13 14:19:40.512: INFO: Created: latency-svc-khs6j Mar 13 14:19:40.535: INFO: Got endpoints: latency-svc-khs6j [636.846849ms] Mar 13 14:19:40.560: INFO: Created: latency-svc-q82rx Mar 13 14:19:40.569: INFO: Got endpoints: latency-svc-q82rx [646.294559ms] Mar 13 14:19:40.627: INFO: Created: latency-svc-zbqxl Mar 13 14:19:40.661: INFO: Created: latency-svc-hnv6q Mar 13 14:19:40.661: INFO: Got endpoints: latency-svc-zbqxl [708.116893ms] Mar 13 14:19:40.665: INFO: Got endpoints: latency-svc-hnv6q [640.320985ms] Mar 13 14:19:40.685: INFO: Created: latency-svc-pjj5g Mar 13 14:19:40.693: INFO: Got endpoints: latency-svc-pjj5g [619.119792ms] Mar 13 14:19:40.711: INFO: Created: latency-svc-brc72 Mar 13 14:19:40.714: INFO: Got endpoints: latency-svc-brc72 [592.250352ms] Mar 13 14:19:40.783: INFO: Created: latency-svc-h6wmh Mar 13 14:19:40.785: INFO: Got endpoints: latency-svc-h6wmh [92.247103ms] Mar 13 14:19:40.811: INFO: Created: latency-svc-64qqs Mar 13 14:19:40.817: INFO: Got endpoints: latency-svc-64qqs [608.823822ms] Mar 13 14:19:40.840: INFO: Created: latency-svc-msdt2 Mar 13 14:19:40.845: INFO: Got endpoints: latency-svc-msdt2 [606.975641ms] Mar 13 14:19:40.876: INFO: Created: latency-svc-9qcjj Mar 13 14:19:40.881: INFO: Got endpoints: latency-svc-9qcjj [620.133147ms] Mar 13 14:19:40.922: INFO: Created: latency-svc-m5k65 Mar 13 14:19:40.925: INFO: Got endpoints: 
latency-svc-m5k65 [633.103549ms] Mar 13 14:19:40.948: INFO: Created: latency-svc-7fs4m Mar 13 14:19:40.956: INFO: Got endpoints: latency-svc-7fs4m [604.307723ms] Mar 13 14:19:40.979: INFO: Created: latency-svc-w5vh8 Mar 13 14:19:41.003: INFO: Got endpoints: latency-svc-w5vh8 [621.666757ms] Mar 13 14:19:41.004: INFO: Created: latency-svc-wc9xq Mar 13 14:19:41.058: INFO: Got endpoints: latency-svc-wc9xq [652.193735ms] Mar 13 14:19:41.070: INFO: Created: latency-svc-cwpfr Mar 13 14:19:41.077: INFO: Got endpoints: latency-svc-cwpfr [646.871192ms] Mar 13 14:19:41.099: INFO: Created: latency-svc-5l95g Mar 13 14:19:41.108: INFO: Got endpoints: latency-svc-5l95g [621.672602ms] Mar 13 14:19:41.129: INFO: Created: latency-svc-fbp4k Mar 13 14:19:41.137: INFO: Got endpoints: latency-svc-fbp4k [601.714746ms] Mar 13 14:19:41.196: INFO: Created: latency-svc-fhdgz Mar 13 14:19:41.198: INFO: Got endpoints: latency-svc-fhdgz [629.337634ms] Mar 13 14:19:41.226: INFO: Created: latency-svc-8jpwm Mar 13 14:19:41.244: INFO: Got endpoints: latency-svc-8jpwm [583.346945ms] Mar 13 14:19:41.268: INFO: Created: latency-svc-k4529 Mar 13 14:19:41.276: INFO: Got endpoints: latency-svc-k4529 [610.906576ms] Mar 13 14:19:41.334: INFO: Created: latency-svc-sg7tr Mar 13 14:19:41.336: INFO: Got endpoints: latency-svc-sg7tr [622.400099ms] Mar 13 14:19:41.364: INFO: Created: latency-svc-qdr65 Mar 13 14:19:41.373: INFO: Got endpoints: latency-svc-qdr65 [587.931975ms] Mar 13 14:19:41.394: INFO: Created: latency-svc-hhrds Mar 13 14:19:41.403: INFO: Got endpoints: latency-svc-hhrds [586.05862ms] Mar 13 14:19:41.424: INFO: Created: latency-svc-4dlk6 Mar 13 14:19:41.428: INFO: Got endpoints: latency-svc-4dlk6 [583.149207ms] Mar 13 14:19:41.472: INFO: Created: latency-svc-r2fh6 Mar 13 14:19:41.483: INFO: Got endpoints: latency-svc-r2fh6 [601.783869ms] Mar 13 14:19:41.502: INFO: Created: latency-svc-5hv5l Mar 13 14:19:41.507: INFO: Got endpoints: latency-svc-5hv5l [581.168617ms] Mar 13 14:19:41.525: INFO: Created: latency-svc-x7wbb Mar 13 14:19:41.531: INFO: Got endpoints: latency-svc-x7wbb [575.165852ms] Mar 13 14:19:41.550: INFO: Created: latency-svc-g5tn7 Mar 13 14:19:41.615: INFO: Got endpoints: latency-svc-g5tn7 [612.36008ms] Mar 13 14:19:41.617: INFO: Created: latency-svc-4kbvc Mar 13 14:19:41.621: INFO: Got endpoints: latency-svc-4kbvc [562.166586ms] Mar 13 14:19:41.651: INFO: Created: latency-svc-rv7cl Mar 13 14:19:41.657: INFO: Got endpoints: latency-svc-rv7cl [580.5849ms] Mar 13 14:19:41.675: INFO: Created: latency-svc-ghg5v Mar 13 14:19:41.694: INFO: Got endpoints: latency-svc-ghg5v [586.248723ms] Mar 13 14:19:41.694: INFO: Created: latency-svc-7rv8k Mar 13 14:19:41.712: INFO: Got endpoints: latency-svc-7rv8k [574.837822ms] Mar 13 14:19:41.789: INFO: Created: latency-svc-lhnmr Mar 13 14:19:41.792: INFO: Got endpoints: latency-svc-lhnmr [593.977905ms] Mar 13 14:19:41.825: INFO: Created: latency-svc-fn547 Mar 13 14:19:41.839: INFO: Got endpoints: latency-svc-fn547 [594.789687ms] Mar 13 14:19:41.862: INFO: Created: latency-svc-g9rvf Mar 13 14:19:41.868: INFO: Got endpoints: latency-svc-g9rvf [592.123471ms] Mar 13 14:19:41.886: INFO: Created: latency-svc-mwjwg Mar 13 14:19:41.951: INFO: Got endpoints: latency-svc-mwjwg [614.22572ms] Mar 13 14:19:41.953: INFO: Created: latency-svc-64zkx Mar 13 14:19:41.959: INFO: Got endpoints: latency-svc-64zkx [585.705104ms] Mar 13 14:19:41.981: INFO: Created: latency-svc-tcdkb Mar 13 14:19:42.007: INFO: Got endpoints: latency-svc-tcdkb [604.327263ms] Mar 13 14:19:42.009: INFO: Created: 
latency-svc-x4zjh Mar 13 14:19:42.014: INFO: Got endpoints: latency-svc-x4zjh [586.241654ms] Mar 13 14:19:42.107: INFO: Created: latency-svc-bv42v Mar 13 14:19:42.138: INFO: Got endpoints: latency-svc-bv42v [654.841816ms] Mar 13 14:19:42.139: INFO: Created: latency-svc-26q4t Mar 13 14:19:42.147: INFO: Got endpoints: latency-svc-26q4t [640.110197ms] Mar 13 14:19:42.181: INFO: Created: latency-svc-kw7jv Mar 13 14:19:42.189: INFO: Got endpoints: latency-svc-kw7jv [657.37703ms] Mar 13 14:19:42.256: INFO: Created: latency-svc-km2wh Mar 13 14:19:42.259: INFO: Got endpoints: latency-svc-km2wh [643.130116ms] Mar 13 14:19:42.282: INFO: Created: latency-svc-7kt5l Mar 13 14:19:42.295: INFO: Got endpoints: latency-svc-7kt5l [674.09096ms] Mar 13 14:19:42.313: INFO: Created: latency-svc-fbh85 Mar 13 14:19:42.316: INFO: Got endpoints: latency-svc-fbh85 [658.279847ms] Mar 13 14:19:42.406: INFO: Created: latency-svc-j8gbp Mar 13 14:19:42.431: INFO: Got endpoints: latency-svc-j8gbp [737.476481ms] Mar 13 14:19:42.432: INFO: Created: latency-svc-lrg2g Mar 13 14:19:42.436: INFO: Got endpoints: latency-svc-lrg2g [723.953949ms] Mar 13 14:19:42.456: INFO: Created: latency-svc-s6lxn Mar 13 14:19:42.461: INFO: Got endpoints: latency-svc-s6lxn [668.406887ms] Mar 13 14:19:42.491: INFO: Created: latency-svc-94vv5 Mar 13 14:19:42.497: INFO: Got endpoints: latency-svc-94vv5 [657.954221ms] Mar 13 14:19:42.556: INFO: Created: latency-svc-cz28x Mar 13 14:19:42.559: INFO: Got endpoints: latency-svc-cz28x [690.120573ms] Mar 13 14:19:42.578: INFO: Created: latency-svc-t8tx6 Mar 13 14:19:42.594: INFO: Got endpoints: latency-svc-t8tx6 [643.237515ms] Mar 13 14:19:42.618: INFO: Created: latency-svc-2pgfp Mar 13 14:19:42.623: INFO: Got endpoints: latency-svc-2pgfp [664.231226ms] Mar 13 14:19:42.642: INFO: Created: latency-svc-9chn2 Mar 13 14:19:42.706: INFO: Created: latency-svc-ktwqd Mar 13 14:19:42.706: INFO: Got endpoints: latency-svc-9chn2 [698.315773ms] Mar 13 14:19:42.708: INFO: Got endpoints: latency-svc-ktwqd [694.288321ms] Mar 13 14:19:42.735: INFO: Created: latency-svc-fklfm Mar 13 14:19:42.738: INFO: Got endpoints: latency-svc-fklfm [600.58003ms] Mar 13 14:19:42.757: INFO: Created: latency-svc-lp2bx Mar 13 14:19:42.780: INFO: Got endpoints: latency-svc-lp2bx [633.114344ms] Mar 13 14:19:42.849: INFO: Created: latency-svc-4sxj7 Mar 13 14:19:42.851: INFO: Got endpoints: latency-svc-4sxj7 [662.638111ms] Mar 13 14:19:42.871: INFO: Created: latency-svc-zqv7z Mar 13 14:19:42.880: INFO: Got endpoints: latency-svc-zqv7z [620.986141ms] Mar 13 14:19:42.903: INFO: Created: latency-svc-fdxck Mar 13 14:19:42.925: INFO: Got endpoints: latency-svc-fdxck [630.064296ms] Mar 13 14:19:42.926: INFO: Created: latency-svc-lrzpm Mar 13 14:19:42.929: INFO: Got endpoints: latency-svc-lrzpm [613.043878ms] Mar 13 14:19:42.993: INFO: Created: latency-svc-2gfmt Mar 13 14:19:43.020: INFO: Got endpoints: latency-svc-2gfmt [588.654094ms] Mar 13 14:19:43.023: INFO: Created: latency-svc-b47g7 Mar 13 14:19:43.039: INFO: Got endpoints: latency-svc-b47g7 [602.869092ms] Mar 13 14:19:43.057: INFO: Created: latency-svc-4cjv8 Mar 13 14:19:43.060: INFO: Got endpoints: latency-svc-4cjv8 [599.519829ms] Mar 13 14:19:43.081: INFO: Created: latency-svc-cvg2k Mar 13 14:19:43.085: INFO: Got endpoints: latency-svc-cvg2k [588.02105ms] Mar 13 14:19:43.268: INFO: Created: latency-svc-kgdms Mar 13 14:19:43.270: INFO: Got endpoints: latency-svc-kgdms [711.737404ms] Mar 13 14:19:43.297: INFO: Created: latency-svc-nwmdw Mar 13 14:19:43.331: INFO: Got endpoints: 
latency-svc-nwmdw [737.064761ms] Mar 13 14:19:43.367: INFO: Created: latency-svc-84fpb Mar 13 14:19:43.412: INFO: Got endpoints: latency-svc-84fpb [788.465732ms] Mar 13 14:19:43.428: INFO: Created: latency-svc-zzrbx Mar 13 14:19:43.434: INFO: Got endpoints: latency-svc-zzrbx [728.198828ms] Mar 13 14:19:43.460: INFO: Created: latency-svc-bfgn6 Mar 13 14:19:43.464: INFO: Got endpoints: latency-svc-bfgn6 [755.299196ms] Mar 13 14:19:43.489: INFO: Created: latency-svc-9nl4s Mar 13 14:19:43.494: INFO: Got endpoints: latency-svc-9nl4s [756.179214ms] Mar 13 14:19:43.550: INFO: Created: latency-svc-x8z9j Mar 13 14:19:43.552: INFO: Got endpoints: latency-svc-x8z9j [771.908922ms] Mar 13 14:19:43.578: INFO: Created: latency-svc-ftx7k Mar 13 14:19:43.591: INFO: Got endpoints: latency-svc-ftx7k [739.874012ms] Mar 13 14:19:43.613: INFO: Created: latency-svc-qzbr2 Mar 13 14:19:43.645: INFO: Got endpoints: latency-svc-qzbr2 [765.049904ms] Mar 13 14:19:43.693: INFO: Created: latency-svc-9hsm2 Mar 13 14:19:43.696: INFO: Got endpoints: latency-svc-9hsm2 [770.683375ms] Mar 13 14:19:43.722: INFO: Created: latency-svc-zjbrf Mar 13 14:19:43.730: INFO: Got endpoints: latency-svc-zjbrf [800.969124ms] Mar 13 14:19:43.752: INFO: Created: latency-svc-kzz4k Mar 13 14:19:43.773: INFO: Got endpoints: latency-svc-kzz4k [752.505761ms] Mar 13 14:19:43.789: INFO: Created: latency-svc-4rjr9 Mar 13 14:19:43.843: INFO: Got endpoints: latency-svc-4rjr9 [803.725475ms] Mar 13 14:19:43.849: INFO: Created: latency-svc-fs9x5 Mar 13 14:19:43.857: INFO: Got endpoints: latency-svc-fs9x5 [796.351508ms] Mar 13 14:19:43.884: INFO: Created: latency-svc-mk6w8 Mar 13 14:19:43.887: INFO: Got endpoints: latency-svc-mk6w8 [802.26367ms] Mar 13 14:19:43.908: INFO: Created: latency-svc-t8tdh Mar 13 14:19:43.911: INFO: Got endpoints: latency-svc-t8tdh [640.648461ms] Mar 13 14:19:43.932: INFO: Created: latency-svc-jllfg Mar 13 14:19:43.999: INFO: Got endpoints: latency-svc-jllfg [667.477263ms] Mar 13 14:19:44.005: INFO: Created: latency-svc-82rwp Mar 13 14:19:44.014: INFO: Got endpoints: latency-svc-82rwp [602.284623ms] Mar 13 14:19:44.040: INFO: Created: latency-svc-xtbj5 Mar 13 14:19:44.064: INFO: Got endpoints: latency-svc-xtbj5 [629.663261ms] Mar 13 14:19:44.082: INFO: Created: latency-svc-7xzs2 Mar 13 14:19:44.085: INFO: Got endpoints: latency-svc-7xzs2 [621.037144ms] Mar 13 14:19:44.149: INFO: Created: latency-svc-lt47l Mar 13 14:19:44.150: INFO: Got endpoints: latency-svc-lt47l [655.817371ms] Mar 13 14:19:44.179: INFO: Created: latency-svc-f994n Mar 13 14:19:44.187: INFO: Got endpoints: latency-svc-f994n [635.646111ms] Mar 13 14:19:44.214: INFO: Created: latency-svc-2h8xd Mar 13 14:19:44.224: INFO: Got endpoints: latency-svc-2h8xd [632.573883ms] Mar 13 14:19:44.304: INFO: Created: latency-svc-q7qkv Mar 13 14:19:44.308: INFO: Got endpoints: latency-svc-q7qkv [663.13253ms] Mar 13 14:19:44.329: INFO: Created: latency-svc-dc8f6 Mar 13 14:19:44.338: INFO: Got endpoints: latency-svc-dc8f6 [642.526201ms] Mar 13 14:19:44.359: INFO: Created: latency-svc-j4hp6 Mar 13 14:19:44.363: INFO: Got endpoints: latency-svc-j4hp6 [632.79028ms] Mar 13 14:19:44.382: INFO: Created: latency-svc-dh7ll Mar 13 14:19:44.400: INFO: Got endpoints: latency-svc-dh7ll [627.446244ms] Mar 13 14:19:44.442: INFO: Created: latency-svc-crwz9 Mar 13 14:19:44.461: INFO: Got endpoints: latency-svc-crwz9 [618.163018ms] Mar 13 14:19:44.462: INFO: Created: latency-svc-8pwbv Mar 13 14:19:44.479: INFO: Got endpoints: latency-svc-8pwbv [622.589406ms] Mar 13 14:19:44.496: INFO: Created: 
latency-svc-z6hx9 Mar 13 14:19:44.502: INFO: Got endpoints: latency-svc-z6hx9 [614.926179ms] Mar 13 14:19:44.520: INFO: Created: latency-svc-t9sww Mar 13 14:19:44.522: INFO: Got endpoints: latency-svc-t9sww [610.887945ms] Mar 13 14:19:44.580: INFO: Created: latency-svc-9tp4z Mar 13 14:19:44.605: INFO: Got endpoints: latency-svc-9tp4z [606.241991ms] Mar 13 14:19:44.606: INFO: Created: latency-svc-nx2hb Mar 13 14:19:44.611: INFO: Got endpoints: latency-svc-nx2hb [596.406149ms] Mar 13 14:19:44.630: INFO: Created: latency-svc-42k4d Mar 13 14:19:44.635: INFO: Got endpoints: latency-svc-42k4d [571.580437ms] Mar 13 14:19:44.653: INFO: Created: latency-svc-drrtr Mar 13 14:19:44.659: INFO: Got endpoints: latency-svc-drrtr [574.550102ms] Mar 13 14:19:44.677: INFO: Created: latency-svc-sqmnz Mar 13 14:19:44.723: INFO: Got endpoints: latency-svc-sqmnz [572.723467ms] Mar 13 14:19:44.736: INFO: Created: latency-svc-h2jmh Mar 13 14:19:44.754: INFO: Got endpoints: latency-svc-h2jmh [566.57424ms] Mar 13 14:19:44.773: INFO: Created: latency-svc-fgnm6 Mar 13 14:19:44.780: INFO: Got endpoints: latency-svc-fgnm6 [556.314861ms] Mar 13 14:19:44.797: INFO: Created: latency-svc-4zl84 Mar 13 14:19:44.805: INFO: Got endpoints: latency-svc-4zl84 [496.723659ms] Mar 13 14:19:44.834: INFO: Created: latency-svc-llqzq Mar 13 14:19:44.897: INFO: Got endpoints: latency-svc-llqzq [559.02137ms] Mar 13 14:19:45.147: INFO: Created: latency-svc-kmxzx Mar 13 14:19:45.154: INFO: Got endpoints: latency-svc-kmxzx [790.994682ms] Mar 13 14:19:45.197: INFO: Created: latency-svc-6cht5 Mar 13 14:19:45.222: INFO: Got endpoints: latency-svc-6cht5 [821.667938ms] Mar 13 14:19:45.223: INFO: Created: latency-svc-lslkr Mar 13 14:19:45.232: INFO: Got endpoints: latency-svc-lslkr [770.764063ms] Mar 13 14:19:45.292: INFO: Created: latency-svc-lzvg6 Mar 13 14:19:45.324: INFO: Got endpoints: latency-svc-lzvg6 [844.646917ms] Mar 13 14:19:45.324: INFO: Created: latency-svc-7t9q6 Mar 13 14:19:45.343: INFO: Got endpoints: latency-svc-7t9q6 [840.430181ms] Mar 13 14:19:45.379: INFO: Created: latency-svc-6xx95 Mar 13 14:19:45.382: INFO: Got endpoints: latency-svc-6xx95 [860.287359ms] Mar 13 14:19:45.442: INFO: Created: latency-svc-h2smb Mar 13 14:19:45.443: INFO: Got endpoints: latency-svc-h2smb [838.477903ms] Mar 13 14:19:45.470: INFO: Created: latency-svc-p5sdh Mar 13 14:19:45.473: INFO: Got endpoints: latency-svc-p5sdh [862.168273ms] Mar 13 14:19:45.493: INFO: Created: latency-svc-jgw89 Mar 13 14:19:45.497: INFO: Got endpoints: latency-svc-jgw89 [861.648787ms] Mar 13 14:19:45.516: INFO: Created: latency-svc-cqscs Mar 13 14:19:45.522: INFO: Got endpoints: latency-svc-cqscs [862.099744ms] Mar 13 14:19:45.542: INFO: Created: latency-svc-v92k7 Mar 13 14:19:45.597: INFO: Got endpoints: latency-svc-v92k7 [874.402282ms] Mar 13 14:19:45.599: INFO: Created: latency-svc-zwng9 Mar 13 14:19:45.607: INFO: Got endpoints: latency-svc-zwng9 [852.838366ms] Mar 13 14:19:45.626: INFO: Created: latency-svc-s8652 Mar 13 14:19:45.631: INFO: Got endpoints: latency-svc-s8652 [850.411279ms] Mar 13 14:19:45.648: INFO: Created: latency-svc-g7gln Mar 13 14:19:45.655: INFO: Got endpoints: latency-svc-g7gln [850.498029ms] Mar 13 14:19:45.679: INFO: Created: latency-svc-pbg5w Mar 13 14:19:45.741: INFO: Created: latency-svc-chvr8 Mar 13 14:19:45.742: INFO: Got endpoints: latency-svc-pbg5w [844.560431ms] Mar 13 14:19:45.776: INFO: Got endpoints: latency-svc-chvr8 [621.985599ms] Mar 13 14:19:45.776: INFO: Created: latency-svc-57x4l Mar 13 14:19:45.788: INFO: Got endpoints: 
latency-svc-57x4l [565.738091ms] Mar 13 14:19:45.806: INFO: Created: latency-svc-p8kcn Mar 13 14:19:45.819: INFO: Got endpoints: latency-svc-p8kcn [586.939369ms] Mar 13 14:19:45.841: INFO: Created: latency-svc-k9qfw Mar 13 14:19:45.891: INFO: Got endpoints: latency-svc-k9qfw [567.149818ms] Mar 13 14:19:45.892: INFO: Created: latency-svc-x78pf Mar 13 14:19:45.909: INFO: Got endpoints: latency-svc-x78pf [566.303455ms] Mar 13 14:19:45.949: INFO: Created: latency-svc-bk7h4 Mar 13 14:19:45.958: INFO: Got endpoints: latency-svc-bk7h4 [575.923085ms] Mar 13 14:19:45.979: INFO: Created: latency-svc-nb9ws Mar 13 14:19:46.029: INFO: Got endpoints: latency-svc-nb9ws [585.253682ms] Mar 13 14:19:46.040: INFO: Created: latency-svc-fklc4 Mar 13 14:19:46.047: INFO: Got endpoints: latency-svc-fklc4 [574.626599ms] Mar 13 14:19:46.077: INFO: Created: latency-svc-wfhgb Mar 13 14:19:46.090: INFO: Got endpoints: latency-svc-wfhgb [592.941601ms] Mar 13 14:19:46.111: INFO: Created: latency-svc-pc2mc Mar 13 14:19:46.115: INFO: Got endpoints: latency-svc-pc2mc [593.114477ms] Mar 13 14:19:46.166: INFO: Created: latency-svc-ksk98 Mar 13 14:19:46.169: INFO: Got endpoints: latency-svc-ksk98 [571.237723ms] Mar 13 14:19:46.196: INFO: Created: latency-svc-b2k6l Mar 13 14:19:46.205: INFO: Got endpoints: latency-svc-b2k6l [597.633728ms] Mar 13 14:19:46.227: INFO: Created: latency-svc-xxm5p Mar 13 14:19:46.236: INFO: Got endpoints: latency-svc-xxm5p [605.543784ms] Mar 13 14:19:46.258: INFO: Created: latency-svc-drrl8 Mar 13 14:19:46.328: INFO: Got endpoints: latency-svc-drrl8 [672.63843ms] Mar 13 14:19:46.329: INFO: Created: latency-svc-pqmdp Mar 13 14:19:46.352: INFO: Got endpoints: latency-svc-pqmdp [610.473195ms] Mar 13 14:19:46.386: INFO: Created: latency-svc-6j24m Mar 13 14:19:46.392: INFO: Got endpoints: latency-svc-6j24m [616.522971ms] Mar 13 14:19:46.411: INFO: Created: latency-svc-xsntt Mar 13 14:19:46.416: INFO: Got endpoints: latency-svc-xsntt [628.597662ms] Mar 13 14:19:46.472: INFO: Created: latency-svc-d7mj6 Mar 13 14:19:46.474: INFO: Got endpoints: latency-svc-d7mj6 [655.427865ms] Mar 13 14:19:46.502: INFO: Created: latency-svc-x2bmh Mar 13 14:19:46.507: INFO: Got endpoints: latency-svc-x2bmh [615.582408ms] Mar 13 14:19:46.527: INFO: Created: latency-svc-sj7ln Mar 13 14:19:46.532: INFO: Got endpoints: latency-svc-sj7ln [623.348254ms] Mar 13 14:19:46.561: INFO: Created: latency-svc-hfbrb Mar 13 14:19:46.568: INFO: Got endpoints: latency-svc-hfbrb [609.908797ms] Mar 13 14:19:46.610: INFO: Created: latency-svc-5qsh8 Mar 13 14:19:46.611: INFO: Got endpoints: latency-svc-5qsh8 [582.604538ms] Mar 13 14:19:46.634: INFO: Created: latency-svc-5srbf Mar 13 14:19:46.659: INFO: Got endpoints: latency-svc-5srbf [611.358226ms] Mar 13 14:19:46.659: INFO: Created: latency-svc-4drzp Mar 13 14:19:46.694: INFO: Got endpoints: latency-svc-4drzp [603.921212ms] Mar 13 14:19:46.694: INFO: Created: latency-svc-rnkp2 Mar 13 14:19:46.754: INFO: Got endpoints: latency-svc-rnkp2 [638.754063ms] Mar 13 14:19:46.771: INFO: Created: latency-svc-j7khr Mar 13 14:19:46.792: INFO: Got endpoints: latency-svc-j7khr [622.944636ms] Mar 13 14:19:46.792: INFO: Latencies: [71.817859ms 80.274866ms 92.247103ms 105.061524ms 132.886301ms 178.50934ms 210.00974ms 237.908004ms 261.863298ms 303.926897ms 339.246647ms 368.727874ms 393.045336ms 454.877928ms 467.551091ms 472.18648ms 496.723659ms 504.068906ms 512.80316ms 515.060345ms 533.489701ms 556.314861ms 559.02137ms 562.166586ms 565.738091ms 566.303455ms 566.57424ms 567.149818ms 571.237723ms 571.580437ms 
572.723467ms 574.550102ms 574.626599ms 574.837822ms 575.165852ms 575.923085ms 576.600872ms 578.746519ms 580.5849ms 581.168617ms 581.22291ms 582.604538ms 583.149207ms 583.346945ms 585.253682ms 585.705104ms 586.05862ms 586.241654ms 586.248723ms 586.939369ms 587.931975ms 588.02105ms 588.654094ms 591.844448ms 592.123471ms 592.250352ms 592.271916ms 592.941601ms 593.114477ms 593.977905ms 594.789687ms 596.406149ms 596.471006ms 597.633728ms 598.22716ms 599.519829ms 600.322673ms 600.58003ms 601.714746ms 601.783869ms 602.284623ms 602.869092ms 603.921212ms 604.307723ms 604.327263ms 605.543784ms 606.241991ms 606.975641ms 608.823822ms 609.908797ms 610.473195ms 610.594206ms 610.887945ms 610.906576ms 611.358226ms 612.36008ms 613.043878ms 614.22572ms 614.926179ms 615.582408ms 615.903833ms 615.919409ms 616.522971ms 618.163018ms 619.119792ms 620.133147ms 620.986141ms 621.037144ms 621.666757ms 621.672602ms 621.985599ms 622.400099ms 622.589406ms 622.944636ms 623.348254ms 627.446244ms 628.597662ms 629.337634ms 629.663261ms 630.064296ms 632.573883ms 632.79028ms 633.103549ms 633.114344ms 635.646111ms 636.846849ms 638.754063ms 639.723653ms 640.110197ms 640.320985ms 640.574839ms 640.648461ms 642.526201ms 643.130116ms 643.237515ms 643.382166ms 645.171066ms 646.253666ms 646.294559ms 646.871192ms 647.209072ms 647.630343ms 648.958345ms 650.631785ms 651.814998ms 652.103977ms 652.193735ms 654.841816ms 655.427865ms 655.817371ms 656.977363ms 657.37703ms 657.954221ms 658.158687ms 658.279847ms 661.159325ms 662.638111ms 663.13253ms 664.231226ms 667.477263ms 668.406887ms 668.799679ms 670.621449ms 672.63843ms 674.09096ms 676.778749ms 686.958965ms 688.907714ms 690.120573ms 693.763918ms 694.288321ms 698.315773ms 701.559988ms 703.665953ms 704.747386ms 706.241551ms 708.116893ms 711.737404ms 719.254353ms 723.953949ms 728.198828ms 737.064761ms 737.476481ms 739.874012ms 752.505761ms 755.299196ms 756.179214ms 765.049904ms 770.683375ms 770.764063ms 771.908922ms 788.465732ms 790.994682ms 796.351508ms 800.969124ms 802.26367ms 803.725475ms 821.667938ms 838.477903ms 840.430181ms 844.560431ms 844.646917ms 850.411279ms 850.498029ms 852.838366ms 860.287359ms 861.648787ms 862.099744ms 862.168273ms 874.402282ms] Mar 13 14:19:46.792: INFO: 50 %ile: 621.985599ms Mar 13 14:19:46.792: INFO: 90 %ile: 771.908922ms Mar 13 14:19:46.792: INFO: 99 %ile: 862.168273ms Mar 13 14:19:46.792: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:19:46.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-43" for this suite. 
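
The latency test above records one propagation latency per service (the gap between "Created" and "Got endpoints"), sorts all 200 samples, and reports the 50th/90th/99th percentiles. A minimal sketch of that aggregation follows, using a simple nearest-rank index; the e2e framework's exact rounding rule may differ, and the sample values are a small illustrative subset of the run above.

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the q-th percentile (0-100) of already-sorted
// durations using a simple index = len*q/100 rule.
func percentile(sorted []time.Duration, q int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	i := len(sorted) * q / 100
	if i >= len(sorted) {
		i = len(sorted) - 1
	}
	return sorted[i]
}

func main() {
	// A few endpoint-propagation latencies; the real run collected 200.
	samples := []time.Duration{
		72 * time.Millisecond, 303 * time.Millisecond,
		622 * time.Millisecond, 772 * time.Millisecond,
		862 * time.Millisecond,
	}
	sort.Slice(samples, func(a, b int) bool { return samples[a] < samples[b] })
	for _, q := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", q, percentile(samples, q))
	}
}

The conformance check itself is simply that these percentiles stay below the framework's thresholds, i.e. that endpoint propagation "should not be very high".
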
Mar 13 14:20:16.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:20:16.861: INFO: namespace svc-latency-43 deletion completed in 30.063947474s • [SLOW TEST:40.842 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:20:16.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 14:20:16.914: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.640998ms) Mar 13 14:20:16.916: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.979622ms) Mar 13 14:20:16.918: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.229306ms) Mar 13 14:20:16.920: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.003208ms) Mar 13 14:20:16.922: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.833563ms) Mar 13 14:20:16.945: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 23.357013ms) Mar 13 14:20:16.947: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.855818ms) Mar 13 14:20:16.949: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.626935ms) Mar 13 14:20:16.951: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.005817ms) Mar 13 14:20:16.952: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.633409ms) Mar 13 14:20:16.954: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.625344ms) Mar 13 14:20:16.956: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.609812ms) Mar 13 14:20:16.958: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.209205ms) Mar 13 14:20:16.960: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.362217ms) Mar 13 14:20:16.962: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.662267ms) Mar 13 14:20:16.964: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.651929ms) Mar 13 14:20:16.965: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.56814ms) Mar 13 14:20:16.967: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.649186ms) Mar 13 14:20:16.969: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.551484ms) Mar 13 14:20:16.970: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.573557ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:20:16.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8631" for this suite. Mar 13 14:20:22.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:20:23.066: INFO: namespace proxy-8631 deletion completed in 6.094115494s • [SLOW TEST:6.205 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:20:23.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 14:20:23.188: INFO: Creating deployment "nginx-deployment" Mar 13 14:20:23.192: INFO: Waiting for observed generation 1 Mar 13 14:20:25.253: INFO: Waiting for all required pods to come up Mar 13 14:20:25.257: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 13 14:20:27.266: INFO: Waiting for deployment "nginx-deployment" to complete Mar 13 14:20:27.270: INFO: Updating deployment "nginx-deployment" with a non-existent image Mar 13 14:20:27.275: INFO: Updating deployment nginx-deployment Mar 13 14:20:27.275: INFO: Waiting for observed generation 2 Mar 13 14:20:29.320: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 13 14:20:29.322: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 13 14:20:29.323: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 13 14:20:29.328: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 13 14:20:29.328: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 13 14:20:29.329: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 13 14:20:29.332: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Mar 13 14:20:29.332: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Mar 13 14:20:29.336: INFO: Updating deployment nginx-deployment Mar 13 14:20:29.336: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of 
replicas Mar 13 14:20:29.393: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 13 14:20:29.457: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 13 14:20:31.544: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-12,SelfLink:/apis/apps/v1/namespaces/deployment-12/deployments/nginx-deployment,UID:1e15645d-5d15-4bd1-a0ed-bc1fbf0e975c,ResourceVersion:919149,Generation:3,CreationTimestamp:2020-03-13 14:20:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-03-13 14:20:29 +0000 UTC 2020-03-13 14:20:29 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-13 14:20:29 +0000 UTC 2020-03-13 14:20:23 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 13 14:20:31.546: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-12,SelfLink:/apis/apps/v1/namespaces/deployment-12/replicasets/nginx-deployment-55fb7cb77f,UID:679ec513-7c1f-4df5-83f1-34f5ff825f97,ResourceVersion:919138,Generation:3,CreationTimestamp:2020-03-13 14:20:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 1e15645d-5d15-4bd1-a0ed-bc1fbf0e975c 0xc002b2cba7 0xc002b2cba8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 13 14:20:31.546: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 13 14:20:31.546: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-12,SelfLink:/apis/apps/v1/namespaces/deployment-12/replicasets/nginx-deployment-7b8c6f4498,UID:8b064de2-1cab-4a07-95a6-59d899e039a4,ResourceVersion:919144,Generation:3,CreationTimestamp:2020-03-13 14:20:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 1e15645d-5d15-4bd1-a0ed-bc1fbf0e975c 0xc002b2cc77 
0xc002b2cc78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 13 14:20:31.552: INFO: Pod "nginx-deployment-55fb7cb77f-5cnhk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5cnhk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-55fb7cb77f-5cnhk,UID:f618e08c-da58-45df-91f8-62100dbbd2d4,ResourceVersion:919057,Generation:0,CreationTimestamp:2020-03-13 14:20:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 679ec513-7c1f-4df5-83f1-34f5ff825f97 0xc0030f08c7 0xc0030f08c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f0940} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f0960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-13 14:20:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.552: INFO: Pod "nginx-deployment-55fb7cb77f-c9bb5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c9bb5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-55fb7cb77f-c9bb5,UID:67d35ef6-e941-4869-bda3-8c75cb91e471,ResourceVersion:919140,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 679ec513-7c1f-4df5-83f1-34f5ff825f97 0xc0030f0a30 0xc0030f0a31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f0ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f0ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.552: INFO: Pod "nginx-deployment-55fb7cb77f-cv689" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cv689,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-55fb7cb77f-cv689,UID:124991f5-2521-4ae1-a641-a8bf770074ee,ResourceVersion:919158,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 679ec513-7c1f-4df5-83f1-34f5ff825f97 0xc0030f0ba0 0xc0030f0ba1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f0c20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f0c40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.552: INFO: Pod "nginx-deployment-55fb7cb77f-cxhb5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cxhb5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-55fb7cb77f-cxhb5,UID:517beef6-44bb-463e-9b47-ae26a78dd043,ResourceVersion:919122,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 679ec513-7c1f-4df5-83f1-34f5ff825f97 0xc0030f0d10 0xc0030f0d11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f0d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f0db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.553: INFO: Pod "nginx-deployment-55fb7cb77f-lcp68" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lcp68,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-55fb7cb77f-lcp68,UID:15f58d1e-34a0-493a-a909-fda2c12d8947,ResourceVersion:919205,Generation:0,CreationTimestamp:2020-03-13 14:20:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 679ec513-7c1f-4df5-83f1-34f5ff825f97 0xc0030f0e30 0xc0030f0e31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f0eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f0ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.2.161,StartTime:2020-03-13 14:20:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.553: INFO: Pod "nginx-deployment-55fb7cb77f-lfdzw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lfdzw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-55fb7cb77f-lfdzw,UID:e7a43d2f-e196-493f-9645-6ea18d0ad5b3,ResourceVersion:919167,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 679ec513-7c1f-4df5-83f1-34f5ff825f97 0xc0030f0fc0 0xc0030f0fc1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f1040} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f1060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.553: INFO: Pod "nginx-deployment-55fb7cb77f-mdjvc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mdjvc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-55fb7cb77f-mdjvc,UID:7f92645c-ab21-4ea1-97df-8979827af5ba,ResourceVersion:919202,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 679ec513-7c1f-4df5-83f1-34f5ff825f97 0xc0030f1130 0xc0030f1131}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f11b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f11d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} 
Mar 13 14:20:31.553: INFO: Pod "nginx-deployment-55fb7cb77f-mktb7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mktb7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-55fb7cb77f-mktb7,UID:84a2e869-7874-44b3-9082-9416aeecc56f,ResourceVersion:919041,Generation:0,CreationTimestamp:2020-03-13 14:20:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 679ec513-7c1f-4df5-83f1-34f5ff825f97 0xc0030f12a0 0xc0030f12a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f1320} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f1340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-13 14:20:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.553: INFO: Pod "nginx-deployment-55fb7cb77f-mz29p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mz29p,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-55fb7cb77f-mz29p,UID:c7197b30-2dc3-4c8f-a4e9-a20d3932a51f,ResourceVersion:919194,Generation:0,CreationTimestamp:2020-03-13 14:20:29 
+0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 679ec513-7c1f-4df5-83f1-34f5ff825f97 0xc0030f1410 0xc0030f1411}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f14b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f14e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.553: INFO: Pod "nginx-deployment-55fb7cb77f-r9vjd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r9vjd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-55fb7cb77f-r9vjd,UID:504fd621-dc06-4f90-bd5b-17e33ef08ec2,ResourceVersion:919055,Generation:0,CreationTimestamp:2020-03-13 14:20:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 679ec513-7c1f-4df5-83f1-34f5ff825f97 0xc0030f15b0 0xc0030f15b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f1630} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f1650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-13 14:20:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.553: INFO: Pod "nginx-deployment-55fb7cb77f-tpqbx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tpqbx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-55fb7cb77f-tpqbx,UID:3c537c54-e5f8-404f-9ec3-0aa11526ad6e,ResourceVersion:919200,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 679ec513-7c1f-4df5-83f1-34f5ff825f97 0xc0030f1720 0xc0030f1721}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f17a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f17c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.553: INFO: Pod "nginx-deployment-55fb7cb77f-w7mn8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w7mn8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-55fb7cb77f-w7mn8,UID:6fa119ec-3170-49a9-b9ff-702466b8c3cb,ResourceVersion:919139,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 679ec513-7c1f-4df5-83f1-34f5ff825f97 0xc0030f1890 0xc0030f1891}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f1910} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f1930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.554: INFO: Pod "nginx-deployment-55fb7cb77f-xt5s2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xt5s2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-55fb7cb77f-xt5s2,UID:c78fab38-b733-4720-8d9b-01b6588a806f,ResourceVersion:919050,Generation:0,CreationTimestamp:2020-03-13 14:20:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 679ec513-7c1f-4df5-83f1-34f5ff825f97 0xc0030f1a00 0xc0030f1a01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f1a80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f1aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-13 14:20:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.554: INFO: Pod "nginx-deployment-7b8c6f4498-2x8gr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2x8gr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-2x8gr,UID:b28c4863-16bc-47ce-bac1-805176b0d939,ResourceVersion:919176,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc0030f1b70 0xc0030f1b71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f1be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f1c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.554: INFO: Pod "nginx-deployment-7b8c6f4498-4776w" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4776w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-4776w,UID:566b6255-484f-4e28-b95e-cfa78dc28f22,ResourceVersion:918976,Generation:0,CreationTimestamp:2020-03-13 14:20:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc0030f1cc0 0xc0030f1cc1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f1d40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f1d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.2.157,StartTime:2020-03-13 14:20:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-13 14:20:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://54341471e568c4dacc8888245d601f7fe85223c9d6b1f8f9e3f3e2221eb561ec}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.554: INFO: Pod "nginx-deployment-7b8c6f4498-549bd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-549bd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-549bd,UID:1a25424b-ed75-44eb-b474-00d4531d9f5b,ResourceVersion:918972,Generation:0,CreationTimestamp:2020-03-13 14:20:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc0030f1e30 0xc0030f1e31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030f1ea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030f1ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.2.159,StartTime:2020-03-13 14:20:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-13 14:20:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9209fb76632407091fb10058e68313927add5a6186f2c1a70a09b32819d7b255}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.554: INFO: Pod "nginx-deployment-7b8c6f4498-6k8tl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6k8tl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-6k8tl,UID:f74fbd30-b94b-4882-b115-09a838441e59,ResourceVersion:919124,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc0030f1f90 0xc0030f1f91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9e000} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9e020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.554: INFO: Pod "nginx-deployment-7b8c6f4498-84jx4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-84jx4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-84jx4,UID:6d6bf30d-82be-4186-bb0c-d83bb8b15726,ResourceVersion:919132,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9e0b0 0xc002b9e0b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9e130} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b9e150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.554: INFO: Pod "nginx-deployment-7b8c6f4498-8vc46" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8vc46,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-8vc46,UID:4b546e67-5561-4ba4-94e4-0b3ed3486280,ResourceVersion:919196,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9e210 0xc002b9e211}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9e280} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9e2a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.555: INFO: Pod "nginx-deployment-7b8c6f4498-9cv66" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9cv66,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-9cv66,UID:19428928-e775-48de-83d3-934b03b4db85,ResourceVersion:919168,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9e360 0xc002b9e361}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9e600} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9e620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.555: INFO: Pod "nginx-deployment-7b8c6f4498-chkv5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-chkv5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-chkv5,UID:473f749d-4574-4d22-b472-47a789d51cad,ResourceVersion:918989,Generation:0,CreationTimestamp:2020-03-13 14:20:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9e6e0 0xc002b9e6e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9e750} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9e770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.2.156,StartTime:2020-03-13 14:20:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-13 14:20:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ab69765aa377e9aa10d2e0bf350c047014df770115f9a75a0530cc84479aaf10}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.555: INFO: Pod "nginx-deployment-7b8c6f4498-dvh9p" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dvh9p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-dvh9p,UID:fd21c436-c660-470d-9e12-c41f800dac6a,ResourceVersion:919190,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9e840 0xc002b9e841}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9e8b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9e8d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.555: INFO: Pod "nginx-deployment-7b8c6f4498-gfl5m" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gfl5m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-gfl5m,UID:c1972831-adea-4dbf-ae86-b883239dc197,ResourceVersion:918980,Generation:0,CreationTimestamp:2020-03-13 14:20:23 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9e990 0xc002b9e991}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9ea00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9ea20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.2.160,StartTime:2020-03-13 14:20:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-13 14:20:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://89476794c6f482af185fec9e95968527ce91bd6e59d4df462e278ddb46b67c60}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.555: INFO: Pod "nginx-deployment-7b8c6f4498-gh5xj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gh5xj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-gh5xj,UID:be8a962e-c00e-4396-b8f9-49b52187f38e,ResourceVersion:918984,Generation:0,CreationTimestamp:2020-03-13 14:20:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9eaf0 
0xc002b9eaf1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9eb60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9eb80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.2.158,StartTime:2020-03-13 14:20:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-13 14:20:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b8b458521eef1e825ad4801b357369a6f999ceeb6ab2a13aad42c05dcd3e67bd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.555: INFO: Pod "nginx-deployment-7b8c6f4498-h56vd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h56vd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-h56vd,UID:7c384a97-52bc-4eff-bcb2-8dfd7a0d0f89,ResourceVersion:919185,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9ec50 0xc002b9ec51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9ecc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9ece0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.556: INFO: Pod "nginx-deployment-7b8c6f4498-h84lg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h84lg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-h84lg,UID:77c7e7bf-bb6b-4c40-b642-629e59614593,ResourceVersion:919153,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9eda0 0xc002b9eda1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9ee10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9ee30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.556: INFO: Pod "nginx-deployment-7b8c6f4498-qc7wb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qc7wb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-qc7wb,UID:238d8101-0c9a-4fb8-a168-742b8f106ef8,ResourceVersion:919004,Generation:0,CreationTimestamp:2020-03-13 14:20:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9eef0 0xc002b9eef1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9ef70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9ef90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.1.67,StartTime:2020-03-13 14:20:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-13 14:20:26 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8a080b653e918e4dfff6efea9f73d11600f653665c41125323d1f4ba66caa4d6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.556: INFO: Pod "nginx-deployment-7b8c6f4498-s8rjh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s8rjh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-s8rjh,UID:bb2e7c9c-e69f-4840-8693-fc277041ce1b,ResourceVersion:919173,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9f070 0xc002b9f071}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9f0e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9f100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.556: INFO: Pod "nginx-deployment-7b8c6f4498-tb5g9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tb5g9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-tb5g9,UID:d40672dc-2ce8-4a70-aed5-a6888bb2cdd7,ResourceVersion:919159,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9f1d0 0xc002b9f1d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9f240} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9f260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.556: INFO: Pod "nginx-deployment-7b8c6f4498-w7kjc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w7kjc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-w7kjc,UID:995a6afa-4d79-4ac0-a097-eaccce08ebcc,ResourceVersion:919007,Generation:0,CreationTimestamp:2020-03-13 14:20:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9f330 0xc002b9f331}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9f3a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9f3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.1.66,StartTime:2020-03-13 14:20:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-13 14:20:26 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2ff43d42291d17e0b7696c7c70bdc48718f6201bb07d9d4d7fd2e30b9adcd2d6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.556: INFO: Pod "nginx-deployment-7b8c6f4498-ztf6s" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ztf6s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-ztf6s,UID:cf5089fb-7110-4662-8805-e4c6c1fea90c,ResourceVersion:918970,Generation:0,CreationTimestamp:2020-03-13 14:20:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9f490 0xc002b9f491}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9f500} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9f520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.1.64,StartTime:2020-03-13 14:20:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-13 14:20:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c5186d6d6efbfa31c7f18d519408ef38a882ce85c6bc448830a826fbbc7c1014}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.556: INFO: Pod "nginx-deployment-7b8c6f4498-zwx4t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zwx4t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-zwx4t,UID:3372ea6c-4b35-4243-8b9a-26f123586830,ResourceVersion:919127,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9f600 0xc002b9f601}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9f670} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9f690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 13 14:20:31.557: INFO: Pod "nginx-deployment-7b8c6f4498-zz4nv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zz4nv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-12,SelfLink:/api/v1/namespaces/deployment-12/pods/nginx-deployment-7b8c6f4498-zz4nv,UID:9ee88227-c240-4f58-a71b-88f74cd9a1bb,ResourceVersion:919152,Generation:0,CreationTimestamp:2020-03-13 14:20:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b064de2-1cab-4a07-95a6-59d899e039a4 0xc002b9f760 0xc002b9f761}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hwvzx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hwvzx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hwvzx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b9f7d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b9f7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-13 14:20:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:20:31.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-12" for this suite. 
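The pod dump above is the state the proportional-scaling test asserts on: mid-rollout, ReplicaSet nginx-deployment-7b8c6f4498 owns a mix of Running pods that are "available" and Pending pods still in ContainerCreating, because the deployment controller distributes added replicas across the old and new ReplicaSets in proportion to their current sizes. A minimal sketch of a Deployment of the kind this test drives follows; only the image and the name: nginx pod label are taken from the log, while the replica count and rolling-update bounds are illustrative assumptions:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
  spec:
    replicas: 10              # illustrative; the exact spec is not printed in the log
    selector:
      matchLabels:
        name: nginx           # matches the pod label in the dumps above
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 25%         # assumed defaults; these bounds cap how far the
        maxUnavailable: 25%   # controller may over- or under-shoot during a rollout
    template:
      metadata:
        labels:
          name: nginx
      spec:
        containers:
        - name: nginx
          image: docker.io/library/nginx:1.14-alpine   # image from the log

Scaling such a Deployment while a rollout is still in flight (for example, kubectl scale deployment/nginx-deployment --replicas=30) is what produces the mixed "available"/"not available" listing above: the controller splits the added replicas between the existing ReplicaSets proportionally instead of assigning them all to the newest one.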
Mar 13 14:20:39.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:20:39.662: INFO: namespace deployment-12 deletion completed in 8.096989576s • [SLOW TEST:16.595 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:20:39.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 13 14:20:43.765: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-3e629fb5-352a-4f1f-98dd-5469dc1d366a,GenerateName:,Namespace:events-8811,SelfLink:/api/v1/namespaces/events-8811/pods/send-events-3e629fb5-352a-4f1f-98dd-5469dc1d366a,UID:82cbecdb-1378-4d3d-a1b5-5f615c776a19,ResourceVersion:919474,Generation:0,CreationTimestamp:2020-03-13 14:20:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 735090155,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zldtj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zldtj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-zldtj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f85f40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f85f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-13 14:20:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.2.173,StartTime:2020-03-13 14:20:39 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-13 14:20:41 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://554a9abd322448fc2c7457fdafd4f7c8e6e7a40fed38469aef138527ddb1c99a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 13 14:20:45.769: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 13 14:20:47.773: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:20:47.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8811" for this suite. Mar 13 14:21:25.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:21:25.897: INFO: namespace events-8811 deletion completed in 38.109667776s • [SLOW TEST:46.234 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:21:25.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:21:32.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9231" for this suite. 
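The namespace test above follows a create, populate, delete, verify cycle: make a test namespace, create a Service in it, delete the namespace, wait for it to disappear, recreate it, and confirm it comes back empty. The log does not print the service manifest; a minimal sketch of one that would behave the same way, with illustrative name, selector, and port:

  apiVersion: v1
  kind: Service
  metadata:
    name: test-service    # illustrative; created inside the test namespace
  spec:
    selector:
      app: example        # illustrative selector
    ports:
    - port: 80

Because a Service is a namespaced object, deleting its namespace (the nsdeletetest-* namespaces that appear below) cascades to every object inside it, so the recreated namespace must contain no Services, which is exactly what the final verification step checks.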
Mar 13 14:21:38.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:21:38.249: INFO: namespace namespaces-9231 deletion completed in 6.087051298s STEP: Destroying namespace "nsdeletetest-804" for this suite. Mar 13 14:21:38.251: INFO: Namespace nsdeletetest-804 was already deleted STEP: Destroying namespace "nsdeletetest-4939" for this suite. Mar 13 14:21:44.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:21:44.360: INFO: namespace nsdeletetest-4939 deletion completed in 6.109576539s • [SLOW TEST:18.463 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:21:44.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 13 14:21:44.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9339' Mar 13 14:21:44.497: INFO: stderr: "" Mar 13 14:21:44.497: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Mar 13 14:21:44.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9339' Mar 13 14:21:54.462: INFO: stderr: "" Mar 13 14:21:54.462: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:21:54.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9339" for this suite. 
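With --restart=Never and the run-pod/v1 generator, the kubectl run invocation above submits a bare Pod rather than a Deployment or a Job. Roughly, the generated object looks like the sketch below; the name and image come from the command in the log, while the run label and the container name reflect the generator's usual defaults and should be read as assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: e2e-test-nginx-pod       # from the command in the log
    labels:
      run: e2e-test-nginx-pod      # label the generator derives from the pod name
  spec:
    restartPolicy: Never           # from --restart=Never
    containers:
    - name: e2e-test-nginx-pod
      image: docker.io/library/nginx:1.14-alpine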
Mar 13 14:22:00.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:22:00.562: INFO: namespace kubectl-9339 deletion completed in 6.085666225s • [SLOW TEST:16.202 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:22:00.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-a7eaba06-96a5-4600-96a1-770a28272f39 STEP: Creating a pod to test consume configMaps Mar 13 14:22:00.643: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4ca437d7-c21c-4bea-8326-93b686a14df6" in namespace "projected-371" to be "success or failure" Mar 13 14:22:00.671: INFO: Pod "pod-projected-configmaps-4ca437d7-c21c-4bea-8326-93b686a14df6": Phase="Pending", Reason="", readiness=false. Elapsed: 27.944451ms Mar 13 14:22:02.675: INFO: Pod "pod-projected-configmaps-4ca437d7-c21c-4bea-8326-93b686a14df6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031840947s Mar 13 14:22:04.678: INFO: Pod "pod-projected-configmaps-4ca437d7-c21c-4bea-8326-93b686a14df6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035447188s STEP: Saw pod success Mar 13 14:22:04.679: INFO: Pod "pod-projected-configmaps-4ca437d7-c21c-4bea-8326-93b686a14df6" satisfied condition "success or failure" Mar 13 14:22:04.681: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-4ca437d7-c21c-4bea-8326-93b686a14df6 container projected-configmap-volume-test: STEP: delete the pod Mar 13 14:22:04.701: INFO: Waiting for pod pod-projected-configmaps-4ca437d7-c21c-4bea-8326-93b686a14df6 to disappear Mar 13 14:22:04.717: INFO: Pod pod-projected-configmaps-4ca437d7-c21c-4bea-8326-93b686a14df6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:22:04.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-371" for this suite. 
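The "multiple volumes in the same pod" case above exposes a single ConfigMap through two projected volumes mounted at different paths; the test container reads both mounts and exits, which is why the pod moves from Pending to Succeeded. A minimal sketch follows. The ConfigMap and container names are the ones from the log; the data key, image, command, and mount paths are illustrative assumptions:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: projected-configmap-test-volume-a7eaba06-96a5-4600-96a1-770a28272f39
  data:
    data-1: value-1    # illustrative key and value
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-configmaps-example    # illustrative name
  spec:
    restartPolicy: Never
    volumes:
    - name: projected-volume-1
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-volume-a7eaba06-96a5-4600-96a1-770a28272f39
    - name: projected-volume-2
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-volume-a7eaba06-96a5-4600-96a1-770a28272f39
    containers:
    - name: projected-configmap-volume-test   # container name from the log
      image: docker.io/library/busybox:1.29   # illustrative; the suite uses its own test image
      command: ["sh", "-c", "cat /etc/projected-volume-1/data-1 /etc/projected-volume-2/data-1"]
      volumeMounts:
      - name: projected-volume-1
        mountPath: /etc/projected-volume-1
      - name: projected-volume-2
        mountPath: /etc/projected-volume-2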
Mar 13 14:22:10.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:22:10.812: INFO: namespace projected-371 deletion completed in 6.091592209s • [SLOW TEST:10.250 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:22:10.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0313 14:22:20.884793 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 13 14:22:20.884: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:22:20.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6581" for this suite. 
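The garbage-collector case above hinges on ownerReferences: each pod a replication controller creates carries a reference back to its controller, so deleting the RC without orphaning lets the garbage collector delete the pods, which is what "wait for all pods to be garbage collected" observes. The log prints neither the RC nor its pods; a sketch of what matters on such a pod, with illustrative names and UID:

  apiVersion: v1
  kind: Pod
  metadata:
    name: simpletest-rc-abcde    # illustrative
    ownerReferences:
    - apiVersion: v1
      kind: ReplicationController
      name: simpletest-rc        # illustrative owner
      uid: 00000000-0000-0000-0000-000000000000    # illustrative
      controller: true
      blockOwnerDeletion: true
  spec:
    containers:
    - name: nginx
      image: docker.io/library/nginx:1.14-alpine   # illustrative

Deleting the owner with a Background propagation policy (the non-orphaning path) returns as soon as the RC is gone and leaves the collector to remove the dependents; an Orphan policy would instead strip the ownerReferences and leave the pods running.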
Mar 13 14:22:26.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 13 14:22:26.968: INFO: namespace gc-6581 deletion completed in 6.080733181s
• [SLOW TEST:16.156 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 13 14:22:26.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Mar 13 14:22:27.007: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Mar 13 14:22:27.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1514'
Mar 13 14:22:27.325: INFO: stderr: ""
Mar 13 14:22:27.325: INFO: stdout: "service/redis-slave created\n"
Mar 13 14:22:27.326: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Mar 13 14:22:27.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1514'
Mar 13 14:22:27.580: INFO: stderr: ""
Mar 13 14:22:27.580: INFO: stdout: "service/redis-master created\n"
Mar 13 14:22:27.580: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Mar 13 14:22:27.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1514'
Mar 13 14:22:27.809: INFO: stderr: ""
Mar 13 14:22:27.809: INFO: stdout: "service/frontend created\n"
Mar 13 14:22:27.809: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Mar 13 14:22:27.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1514'
Mar 13 14:22:28.028: INFO: stderr: ""
Mar 13 14:22:28.028: INFO: stdout: "deployment.apps/frontend created\n"
Mar 13 14:22:28.028: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Mar 13 14:22:28.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1514'
Mar 13 14:22:28.236: INFO: stderr: ""
Mar 13 14:22:28.236: INFO: stdout: "deployment.apps/redis-master created\n"
Mar 13 14:22:28.236: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Mar 13 14:22:28.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1514'
Mar 13 14:22:28.443: INFO: stderr: ""
Mar 13 14:22:28.443: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Mar 13 14:22:28.443: INFO: Waiting for all frontend pods to be Running.
Mar 13 14:22:33.493: INFO: Waiting for frontend to serve content.
Mar 13 14:22:33.509: INFO: Trying to add a new entry to the guestbook.
Mar 13 14:22:33.519: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Mar 13 14:22:33.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1514'
Mar 13 14:22:33.674: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Mar 13 14:22:33.674: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Mar 13 14:22:33.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1514' Mar 13 14:22:33.794: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 13 14:22:33.794: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 13 14:22:33.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1514' Mar 13 14:22:33.912: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 13 14:22:33.912: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 13 14:22:33.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1514' Mar 13 14:22:33.990: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 13 14:22:33.990: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 13 14:22:33.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1514' Mar 13 14:22:34.075: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 13 14:22:34.075: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 13 14:22:34.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1514' Mar 13 14:22:34.145: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 13 14:22:34.145: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:22:34.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1514" for this suite. 
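Reference note: the "Immediate deletion" warnings above come from `kubectl delete --grace-period=0 --force`, which removes the API object without waiting for the kubelet to confirm that the containers have stopped. Under a normal delete, the wait comes from the pod spec's grace period. A minimal sketch of where that value lives; the name and image are illustrative, not taken from this run:

apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo           # illustrative name
spec:
  terminationGracePeriodSeconds: 30      # the default: kubelet sends SIGTERM, then SIGKILL after 30s
  containers:
  - name: app
    image: docker.io/library/nginx:1.14  # illustrative image
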
Mar 13 14:23:16.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:23:16.277: INFO: namespace kubectl-1514 deletion completed in 42.120268836s • [SLOW TEST:49.308 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:23:16.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 13 14:23:16.415: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 14:23:16.418: INFO: Number of nodes with available pods: 0 Mar 13 14:23:16.418: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:23:17.423: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 14:23:17.426: INFO: Number of nodes with available pods: 0 Mar 13 14:23:17.426: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:23:18.422: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 14:23:18.432: INFO: Number of nodes with available pods: 2 Mar 13 14:23:18.432: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 13 14:23:18.463: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 14:23:18.481: INFO: Number of nodes with available pods: 2 Mar 13 14:23:18.481: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
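Reference note: the "simple DaemonSet" created above is, in shape, one pod template that the DaemonSet controller places on every eligible node. A minimal sketch, with an illustrative image standing in for the suite's own test image:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14  # illustrative; the e2e run uses its own image
        ports:
        - containerPort: 80

Because the template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, the control-plane node is skipped, which matches the "can't tolerate node iruya-control-plane" messages above.
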
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1842, will wait for the garbage collector to delete the pods Mar 13 14:23:19.571: INFO: Deleting DaemonSet.extensions daemon-set took: 26.630772ms Mar 13 14:23:19.871: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.23859ms Mar 13 14:23:34.374: INFO: Number of nodes with available pods: 0 Mar 13 14:23:34.374: INFO: Number of running nodes: 0, number of available pods: 0 Mar 13 14:23:34.376: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1842/daemonsets","resourceVersion":"920209"},"items":null} Mar 13 14:23:34.378: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1842/pods","resourceVersion":"920209"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:23:34.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1842" for this suite. Mar 13 14:23:40.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:23:40.457: INFO: namespace daemonsets-1842 deletion completed in 6.070223035s • [SLOW TEST:24.180 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:23:40.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-pc7k STEP: Creating a pod to test atomic-volume-subpath Mar 13 14:23:40.535: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-pc7k" in namespace "subpath-5010" to be "success or failure" Mar 13 14:23:40.538: INFO: Pod "pod-subpath-test-downwardapi-pc7k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.525882ms Mar 13 14:23:42.541: INFO: Pod "pod-subpath-test-downwardapi-pc7k": Phase="Running", Reason="", readiness=true. Elapsed: 2.005628581s Mar 13 14:23:44.559: INFO: Pod "pod-subpath-test-downwardapi-pc7k": Phase="Running", Reason="", readiness=true. Elapsed: 4.023793653s Mar 13 14:23:46.562: INFO: Pod "pod-subpath-test-downwardapi-pc7k": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.026787809s Mar 13 14:23:48.565: INFO: Pod "pod-subpath-test-downwardapi-pc7k": Phase="Running", Reason="", readiness=true. Elapsed: 8.03023467s Mar 13 14:23:50.568: INFO: Pod "pod-subpath-test-downwardapi-pc7k": Phase="Running", Reason="", readiness=true. Elapsed: 10.033310309s Mar 13 14:23:52.589: INFO: Pod "pod-subpath-test-downwardapi-pc7k": Phase="Running", Reason="", readiness=true. Elapsed: 12.053622124s Mar 13 14:23:54.592: INFO: Pod "pod-subpath-test-downwardapi-pc7k": Phase="Running", Reason="", readiness=true. Elapsed: 14.056744691s Mar 13 14:23:56.595: INFO: Pod "pod-subpath-test-downwardapi-pc7k": Phase="Running", Reason="", readiness=true. Elapsed: 16.059694217s Mar 13 14:23:58.607: INFO: Pod "pod-subpath-test-downwardapi-pc7k": Phase="Running", Reason="", readiness=true. Elapsed: 18.071931915s Mar 13 14:24:00.610: INFO: Pod "pod-subpath-test-downwardapi-pc7k": Phase="Running", Reason="", readiness=true. Elapsed: 20.074856491s Mar 13 14:24:02.613: INFO: Pod "pod-subpath-test-downwardapi-pc7k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.077688753s STEP: Saw pod success Mar 13 14:24:02.613: INFO: Pod "pod-subpath-test-downwardapi-pc7k" satisfied condition "success or failure" Mar 13 14:24:02.615: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-pc7k container test-container-subpath-downwardapi-pc7k: STEP: delete the pod Mar 13 14:24:02.648: INFO: Waiting for pod pod-subpath-test-downwardapi-pc7k to disappear Mar 13 14:24:02.673: INFO: Pod pod-subpath-test-downwardapi-pc7k no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-pc7k Mar 13 14:24:02.673: INFO: Deleting pod "pod-subpath-test-downwardapi-pc7k" in namespace "subpath-5010" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:24:02.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5010" for this suite. 
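Reference note: "atomic writer" volumes (configMap, secret, downwardAPI, projected) are updated via an atomic symlink swap, and the test above mounts a single file out of such a volume using subPath. A minimal sketch; the name, image, and file layout are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo                   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29  # illustrative image
    command: ["sh", "-c", "cat /probe-volume/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /probe-volume/podname
      subPath: podname                     # mount just this one file from the volume
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
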
Mar 13 14:24:08.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:24:08.775: INFO: namespace subpath-5010 deletion completed in 6.097239936s • [SLOW TEST:28.318 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:24:08.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 13 14:24:08.836: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 13 14:24:08.844: INFO: Number of nodes with available pods: 0 Mar 13 14:24:08.844: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 13 14:24:08.913: INFO: Number of nodes with available pods: 0 Mar 13 14:24:08.913: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:09.915: INFO: Number of nodes with available pods: 0 Mar 13 14:24:09.916: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:10.917: INFO: Number of nodes with available pods: 1 Mar 13 14:24:10.917: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 13 14:24:10.946: INFO: Number of nodes with available pods: 1 Mar 13 14:24:10.946: INFO: Number of running nodes: 0, number of available pods: 1 Mar 13 14:24:11.950: INFO: Number of nodes with available pods: 0 Mar 13 14:24:11.950: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 13 14:24:11.979: INFO: Number of nodes with available pods: 0 Mar 13 14:24:11.979: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:12.982: INFO: Number of nodes with available pods: 0 Mar 13 14:24:12.982: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:13.981: INFO: Number of nodes with available pods: 0 Mar 13 14:24:13.981: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:14.984: INFO: Number of nodes with available pods: 0 Mar 13 14:24:14.984: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:15.983: INFO: Number of nodes with available pods: 0 Mar 13 14:24:15.983: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:16.982: INFO: Number of nodes with available pods: 0 Mar 13 14:24:16.982: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:17.984: INFO: Number of nodes with available pods: 0 Mar 13 14:24:17.984: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:18.982: INFO: Number of nodes with available pods: 0 Mar 13 14:24:18.982: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:19.983: INFO: Number of nodes with available pods: 0 Mar 13 14:24:19.983: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:20.983: INFO: Number of nodes with available pods: 0 Mar 13 14:24:20.983: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:21.983: INFO: Number of nodes with available pods: 0 Mar 13 14:24:21.983: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:22.983: INFO: Number of nodes with available pods: 0 Mar 13 14:24:22.983: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:23.985: INFO: Number of nodes with available pods: 0 Mar 13 14:24:23.985: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:24.982: INFO: Number of nodes with available pods: 0 Mar 13 14:24:24.982: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:25.983: INFO: Number of nodes with available pods: 0 Mar 13 14:24:25.983: INFO: Node iruya-worker is running more than one daemon pod Mar 13 14:24:26.983: INFO: Number of nodes with available pods: 1 Mar 13 14:24:26.983: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-13, will wait for the garbage collector to 
delete the pods Mar 13 14:24:27.051: INFO: Deleting DaemonSet.extensions daemon-set took: 6.426354ms Mar 13 14:24:27.351: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.222039ms Mar 13 14:24:34.354: INFO: Number of nodes with available pods: 0 Mar 13 14:24:34.354: INFO: Number of running nodes: 0, number of available pods: 0 Mar 13 14:24:34.356: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-13/daemonsets","resourceVersion":"920429"},"items":null} Mar 13 14:24:34.357: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-13/pods","resourceVersion":"920429"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:24:34.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-13" for this suite. Mar 13 14:24:40.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:24:40.481: INFO: namespace daemonsets-13 deletion completed in 6.094224133s • [SLOW TEST:31.706 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:24:40.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 13 14:24:43.078: INFO: Successfully updated pod "labelsupdate88274b99-7cb6-4f65-b016-bbf05e932fdc" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:24:45.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6647" for this suite. 
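Reference note: the label-update test above works because downwardAPI volume files are re-projected by the kubelet when pod metadata changes, unlike downward API environment variables, which are fixed at container start. A minimal sketch; the name, image, and label values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo                  # illustrative name
  labels:
    release: v1
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29  # illustrative image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels

After something like `kubectl label pod labelsupdate-demo release=v2 --overwrite`, the /etc/podinfo/labels file changes within the kubelet sync period.
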
Mar 13 14:25:07.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:25:07.177: INFO: namespace downward-api-6647 deletion completed in 22.083322855s • [SLOW TEST:26.696 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:25:07.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 13 14:25:07.262: INFO: Waiting up to 5m0s for pod "downward-api-1f819671-6bde-4354-b290-e4d8112df63f" in namespace "downward-api-3589" to be "success or failure" Mar 13 14:25:07.286: INFO: Pod "downward-api-1f819671-6bde-4354-b290-e4d8112df63f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.588614ms Mar 13 14:25:09.289: INFO: Pod "downward-api-1f819671-6bde-4354-b290-e4d8112df63f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026558985s STEP: Saw pod success Mar 13 14:25:09.289: INFO: Pod "downward-api-1f819671-6bde-4354-b290-e4d8112df63f" satisfied condition "success or failure" Mar 13 14:25:09.291: INFO: Trying to get logs from node iruya-worker2 pod downward-api-1f819671-6bde-4354-b290-e4d8112df63f container dapi-container: STEP: delete the pod Mar 13 14:25:09.315: INFO: Waiting for pod downward-api-1f819671-6bde-4354-b290-e4d8112df63f to disappear Mar 13 14:25:09.320: INFO: Pod downward-api-1f819671-6bde-4354-b290-e4d8112df63f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:25:09.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3589" for this suite. 
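Reference note: the env-var flavor of the downward API exercised above uses resourceFieldRef. A minimal sketch; the name, image, and resource values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-demo                      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29  # illustrative image
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: "1"
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu             # with the default divisor of 1, CPU is rounded up to whole cores
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
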
Mar 13 14:25:15.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:25:15.420: INFO: namespace downward-api-3589 deletion completed in 6.096441865s • [SLOW TEST:8.243 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:25:15.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-2874b415-2737-4110-a1d2-30c775600ff1 in namespace container-probe-742 Mar 13 14:25:17.501: INFO: Started pod busybox-2874b415-2737-4110-a1d2-30c775600ff1 in namespace container-probe-742 STEP: checking the pod's current state and verifying that restartCount is present Mar 13 14:25:17.504: INFO: Initial restart count of pod busybox-2874b415-2737-4110-a1d2-30c775600ff1 is 0 Mar 13 14:26:03.589: INFO: Restart count of pod container-probe-742/busybox-2874b415-2737-4110-a1d2-30c775600ff1 is now 1 (46.085116903s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:26:03.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-742" for this suite. 
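Reference note: the restart observed above after roughly 46 seconds is the expected probe sequence — the probed file exists for a while, then consecutive exec-probe failures exceed failureThreshold and the kubelet restarts the container. A minimal sketch; the name, image, and timings are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo                 # illustrative name
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29  # illustrative image
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]    # exit code 0 = healthy, non-zero = failure
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3                  # restart after 3 consecutive failures
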
Mar 13 14:26:09.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:26:09.724: INFO: namespace container-probe-742 deletion completed in 6.083753943s • [SLOW TEST:54.303 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:26:09.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 13 14:26:13.838: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 13 14:26:13.843: INFO: Pod pod-with-poststart-http-hook still exists Mar 13 14:26:15.844: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 13 14:26:15.847: INFO: Pod pod-with-poststart-http-hook still exists Mar 13 14:26:17.844: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 13 14:26:17.848: INFO: Pod pod-with-poststart-http-hook still exists Mar 13 14:26:19.844: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 13 14:26:19.847: INFO: Pod pod-with-poststart-http-hook still exists Mar 13 14:26:21.844: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 13 14:26:21.847: INFO: Pod pod-with-poststart-http-hook still exists Mar 13 14:26:23.844: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 13 14:26:23.848: INFO: Pod pod-with-poststart-http-hook still exists Mar 13 14:26:25.844: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 13 14:26:25.848: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:26:25.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6278" for this suite. 
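Reference note: a postStart httpGet hook fires as soon as the container starts, and the container is killed if the handler fails, which is why the test creates a separate handler pod first. A minimal sketch; the image, path, host, and port are illustrative (the actual test targets its handler pod's IP):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook       # name as in the log; the rest is illustrative
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: docker.io/library/busybox:1.29  # illustrative image
    command: ["sleep", "600"]
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart        # illustrative path
          host: 10.244.1.10                # illustrative handler-pod IP
          port: 8080
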
Mar 13 14:26:47.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:26:47.940: INFO: namespace container-lifecycle-hook-6278 deletion completed in 22.089204311s • [SLOW TEST:38.217 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:26:47.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 13 14:26:48.002: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:26:51.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2214" for this suite. 
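Reference note: with restartPolicy: Never, a failing init container is not retried, the pod goes straight to Failed, and the app containers never start — exactly what the test above asserts. A minimal sketch; names and images are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo                     # illustrative name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29  # illustrative image
    command: ["/bin/false"]                # always exits non-zero
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ["sleep", "600"]              # never reached
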
Mar 13 14:26:57.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:26:57.546: INFO: namespace init-container-2214 deletion completed in 6.127275615s • [SLOW TEST:9.605 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:26:57.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0313 14:27:07.712104 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 13 14:27:07.712: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:27:07.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5196" for this suite. 
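Reference note: the "half of pods have both RCs as owners" setup above is expressed through metadata.ownerReferences; a dependent is garbage-collected only when none of its owners remain, so a pod that still has simpletest-rc-to-stay as a valid owner survives the foreground deletion of simpletest-rc-to-be-deleted. A sketch of what such a dependent pod's metadata looks like; the pod name, UIDs, and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod-demo                # illustrative name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 11111111-1111-1111-1111-111111111111   # illustrative UID
    blockOwnerDeletion: true
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 22222222-2222-2222-2222-222222222222   # illustrative UID
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14    # illustrative image
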
Mar 13 14:27:13.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:27:13.812: INFO: namespace gc-5196 deletion completed in 6.096650532s • [SLOW TEST:16.265 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:27:13.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:27:13.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4955" for this suite. 
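Reference note: the pod in this test runs a command that always fails, so its container sits in a crash/backoff loop; the assertion is simply that such a pod can still be deleted normally. A minimal sketch; the name and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo                     # illustrative name
spec:
  containers:
  - name: bin-false
    image: docker.io/library/busybox:1.29  # illustrative image
    command: ["/bin/false"]                # exits 1 immediately; kubelet restarts it with backoff
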
Mar 13 14:27:19.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:27:20.009: INFO: namespace kubelet-test-4955 deletion completed in 6.08806606s • [SLOW TEST:6.197 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:27:20.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2746 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2746 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2746 Mar 13 14:27:20.084: INFO: Found 0 stateful pods, waiting for 1 Mar 13 14:27:30.087: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 13 14:27:30.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2746 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 13 14:27:32.267: INFO: stderr: "I0313 14:27:32.173708 3521 log.go:172] (0xc000116c60) (0xc0005c28c0) Create stream\nI0313 14:27:32.173732 3521 log.go:172] (0xc000116c60) (0xc0005c28c0) Stream added, broadcasting: 1\nI0313 14:27:32.175520 3521 log.go:172] (0xc000116c60) Reply frame received for 1\nI0313 14:27:32.175543 3521 log.go:172] (0xc000116c60) (0xc0005c2960) Create stream\nI0313 14:27:32.175549 3521 log.go:172] (0xc000116c60) (0xc0005c2960) Stream added, broadcasting: 3\nI0313 14:27:32.176178 3521 log.go:172] (0xc000116c60) Reply frame received for 3\nI0313 14:27:32.176200 3521 log.go:172] (0xc000116c60) (0xc0006a1a40) Create stream\nI0313 14:27:32.176207 3521 log.go:172] (0xc000116c60) (0xc0006a1a40) Stream added, broadcasting: 5\nI0313 14:27:32.176734 3521 log.go:172] (0xc000116c60) Reply frame received for 5\nI0313 14:27:32.238950 3521 log.go:172] (0xc000116c60) Data frame received for 
5\nI0313 14:27:32.238968 3521 log.go:172] (0xc0006a1a40) (5) Data frame handling\nI0313 14:27:32.238978 3521 log.go:172] (0xc0006a1a40) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0313 14:27:32.262063 3521 log.go:172] (0xc000116c60) Data frame received for 3\nI0313 14:27:32.262080 3521 log.go:172] (0xc0005c2960) (3) Data frame handling\nI0313 14:27:32.262091 3521 log.go:172] (0xc0005c2960) (3) Data frame sent\nI0313 14:27:32.262474 3521 log.go:172] (0xc000116c60) Data frame received for 3\nI0313 14:27:32.262491 3521 log.go:172] (0xc0005c2960) (3) Data frame handling\nI0313 14:27:32.262506 3521 log.go:172] (0xc000116c60) Data frame received for 5\nI0313 14:27:32.262517 3521 log.go:172] (0xc0006a1a40) (5) Data frame handling\nI0313 14:27:32.263613 3521 log.go:172] (0xc000116c60) Data frame received for 1\nI0313 14:27:32.263625 3521 log.go:172] (0xc0005c28c0) (1) Data frame handling\nI0313 14:27:32.263634 3521 log.go:172] (0xc0005c28c0) (1) Data frame sent\nI0313 14:27:32.263644 3521 log.go:172] (0xc000116c60) (0xc0005c28c0) Stream removed, broadcasting: 1\nI0313 14:27:32.263788 3521 log.go:172] (0xc000116c60) Go away received\nI0313 14:27:32.263908 3521 log.go:172] (0xc000116c60) (0xc0005c28c0) Stream removed, broadcasting: 1\nI0313 14:27:32.263921 3521 log.go:172] (0xc000116c60) (0xc0005c2960) Stream removed, broadcasting: 3\nI0313 14:27:32.263929 3521 log.go:172] (0xc000116c60) (0xc0006a1a40) Stream removed, broadcasting: 5\n" Mar 13 14:27:32.267: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 13 14:27:32.267: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 13 14:27:32.280: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 13 14:27:42.298: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 13 14:27:42.298: INFO: Waiting for statefulset status.replicas updated to 0 Mar 13 14:27:42.312: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999628s Mar 13 14:27:43.315: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992067904s Mar 13 14:27:44.319: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.989217098s Mar 13 14:27:45.323: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.985708291s Mar 13 14:27:46.341: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.981443234s Mar 13 14:27:47.344: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.963769067s Mar 13 14:27:48.359: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.959879694s Mar 13 14:27:49.363: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.945461268s Mar 13 14:27:50.366: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.94173616s Mar 13 14:27:51.369: INFO: Verifying statefulset ss doesn't scale past 1 for another 938.682329ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2746 Mar 13 14:27:52.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 13 14:27:52.560: INFO: stderr: "I0313 14:27:52.492329 3552 log.go:172] (0xc0009e6420) (0xc000a56640) Create stream\nI0313 14:27:52.492372 3552 log.go:172] (0xc0009e6420) (0xc000a56640) Stream added, broadcasting: 
1\nI0313 14:27:52.493990 3552 log.go:172] (0xc0009e6420) Reply frame received for 1\nI0313 14:27:52.494022 3552 log.go:172] (0xc0009e6420) (0xc000a66000) Create stream\nI0313 14:27:52.494033 3552 log.go:172] (0xc0009e6420) (0xc000a66000) Stream added, broadcasting: 3\nI0313 14:27:52.494694 3552 log.go:172] (0xc0009e6420) Reply frame received for 3\nI0313 14:27:52.494714 3552 log.go:172] (0xc0009e6420) (0xc0005b0280) Create stream\nI0313 14:27:52.494722 3552 log.go:172] (0xc0009e6420) (0xc0005b0280) Stream added, broadcasting: 5\nI0313 14:27:52.495341 3552 log.go:172] (0xc0009e6420) Reply frame received for 5\nI0313 14:27:52.555919 3552 log.go:172] (0xc0009e6420) Data frame received for 5\nI0313 14:27:52.555940 3552 log.go:172] (0xc0005b0280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0313 14:27:52.555971 3552 log.go:172] (0xc0009e6420) Data frame received for 3\nI0313 14:27:52.556017 3552 log.go:172] (0xc000a66000) (3) Data frame handling\nI0313 14:27:52.556038 3552 log.go:172] (0xc000a66000) (3) Data frame sent\nI0313 14:27:52.556054 3552 log.go:172] (0xc0009e6420) Data frame received for 3\nI0313 14:27:52.556068 3552 log.go:172] (0xc000a66000) (3) Data frame handling\nI0313 14:27:52.556087 3552 log.go:172] (0xc0005b0280) (5) Data frame sent\nI0313 14:27:52.556096 3552 log.go:172] (0xc0009e6420) Data frame received for 5\nI0313 14:27:52.556103 3552 log.go:172] (0xc0005b0280) (5) Data frame handling\nI0313 14:27:52.557271 3552 log.go:172] (0xc0009e6420) Data frame received for 1\nI0313 14:27:52.557285 3552 log.go:172] (0xc000a56640) (1) Data frame handling\nI0313 14:27:52.557291 3552 log.go:172] (0xc000a56640) (1) Data frame sent\nI0313 14:27:52.557306 3552 log.go:172] (0xc0009e6420) (0xc000a56640) Stream removed, broadcasting: 1\nI0313 14:27:52.557323 3552 log.go:172] (0xc0009e6420) Go away received\nI0313 14:27:52.557569 3552 log.go:172] (0xc0009e6420) (0xc000a56640) Stream removed, broadcasting: 1\nI0313 14:27:52.557585 3552 log.go:172] (0xc0009e6420) (0xc000a66000) Stream removed, broadcasting: 3\nI0313 14:27:52.557591 3552 log.go:172] (0xc0009e6420) (0xc0005b0280) Stream removed, broadcasting: 5\n" Mar 13 14:27:52.560: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 13 14:27:52.560: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 13 14:27:52.564: INFO: Found 1 stateful pods, waiting for 3 Mar 13 14:28:02.568: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 13 14:28:02.568: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 13 14:28:02.568: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 13 14:28:02.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2746 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 13 14:28:02.768: INFO: stderr: "I0313 14:28:02.700437 3572 log.go:172] (0xc000a6a4d0) (0xc0007a0aa0) Create stream\nI0313 14:28:02.700484 3572 log.go:172] (0xc000a6a4d0) (0xc0007a0aa0) Stream added, broadcasting: 1\nI0313 14:28:02.702524 3572 log.go:172] (0xc000a6a4d0) Reply frame received for 1\nI0313 14:28:02.702562 3572 log.go:172] (0xc000a6a4d0) (0xc0008c2000) Create stream\nI0313 14:28:02.702576 3572 
log.go:172] (0xc000a6a4d0) (0xc0008c2000) Stream added, broadcasting: 3\nI0313 14:28:02.703351 3572 log.go:172] (0xc000a6a4d0) Reply frame received for 3\nI0313 14:28:02.703402 3572 log.go:172] (0xc000a6a4d0) (0xc000928000) Create stream\nI0313 14:28:02.703417 3572 log.go:172] (0xc000a6a4d0) (0xc000928000) Stream added, broadcasting: 5\nI0313 14:28:02.704690 3572 log.go:172] (0xc000a6a4d0) Reply frame received for 5\nI0313 14:28:02.764373 3572 log.go:172] (0xc000a6a4d0) Data frame received for 5\nI0313 14:28:02.764402 3572 log.go:172] (0xc000928000) (5) Data frame handling\nI0313 14:28:02.764411 3572 log.go:172] (0xc000928000) (5) Data frame sent\nI0313 14:28:02.764417 3572 log.go:172] (0xc000a6a4d0) Data frame received for 5\nI0313 14:28:02.764422 3572 log.go:172] (0xc000928000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0313 14:28:02.764442 3572 log.go:172] (0xc000a6a4d0) Data frame received for 3\nI0313 14:28:02.764448 3572 log.go:172] (0xc0008c2000) (3) Data frame handling\nI0313 14:28:02.764456 3572 log.go:172] (0xc0008c2000) (3) Data frame sent\nI0313 14:28:02.764465 3572 log.go:172] (0xc000a6a4d0) Data frame received for 3\nI0313 14:28:02.764470 3572 log.go:172] (0xc0008c2000) (3) Data frame handling\nI0313 14:28:02.765442 3572 log.go:172] (0xc000a6a4d0) Data frame received for 1\nI0313 14:28:02.765458 3572 log.go:172] (0xc0007a0aa0) (1) Data frame handling\nI0313 14:28:02.765465 3572 log.go:172] (0xc0007a0aa0) (1) Data frame sent\nI0313 14:28:02.765476 3572 log.go:172] (0xc000a6a4d0) (0xc0007a0aa0) Stream removed, broadcasting: 1\nI0313 14:28:02.765490 3572 log.go:172] (0xc000a6a4d0) Go away received\nI0313 14:28:02.765766 3572 log.go:172] (0xc000a6a4d0) (0xc0007a0aa0) Stream removed, broadcasting: 1\nI0313 14:28:02.765782 3572 log.go:172] (0xc000a6a4d0) (0xc0008c2000) Stream removed, broadcasting: 3\nI0313 14:28:02.765788 3572 log.go:172] (0xc000a6a4d0) (0xc000928000) Stream removed, broadcasting: 5\n" Mar 13 14:28:02.769: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 13 14:28:02.769: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 13 14:28:02.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2746 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 13 14:28:02.985: INFO: stderr: "I0313 14:28:02.889550 3593 log.go:172] (0xc00088e790) (0xc000774a00) Create stream\nI0313 14:28:02.889596 3593 log.go:172] (0xc00088e790) (0xc000774a00) Stream added, broadcasting: 1\nI0313 14:28:02.892074 3593 log.go:172] (0xc00088e790) Reply frame received for 1\nI0313 14:28:02.892098 3593 log.go:172] (0xc00088e790) (0xc000774000) Create stream\nI0313 14:28:02.892104 3593 log.go:172] (0xc00088e790) (0xc000774000) Stream added, broadcasting: 3\nI0313 14:28:02.892726 3593 log.go:172] (0xc00088e790) Reply frame received for 3\nI0313 14:28:02.892752 3593 log.go:172] (0xc00088e790) (0xc00062e1e0) Create stream\nI0313 14:28:02.892763 3593 log.go:172] (0xc00088e790) (0xc00062e1e0) Stream added, broadcasting: 5\nI0313 14:28:02.893312 3593 log.go:172] (0xc00088e790) Reply frame received for 5\nI0313 14:28:02.947782 3593 log.go:172] (0xc00088e790) Data frame received for 5\nI0313 14:28:02.947801 3593 log.go:172] (0xc00062e1e0) (5) Data frame handling\nI0313 14:28:02.947813 3593 log.go:172] (0xc00062e1e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0313 
14:28:02.979396 3593 log.go:172] (0xc00088e790) Data frame received for 5\nI0313 14:28:02.979442 3593 log.go:172] (0xc00062e1e0) (5) Data frame handling\nI0313 14:28:02.979475 3593 log.go:172] (0xc00088e790) Data frame received for 3\nI0313 14:28:02.979506 3593 log.go:172] (0xc000774000) (3) Data frame handling\nI0313 14:28:02.979523 3593 log.go:172] (0xc000774000) (3) Data frame sent\nI0313 14:28:02.979687 3593 log.go:172] (0xc00088e790) Data frame received for 3\nI0313 14:28:02.979701 3593 log.go:172] (0xc000774000) (3) Data frame handling\nI0313 14:28:02.981479 3593 log.go:172] (0xc00088e790) Data frame received for 1\nI0313 14:28:02.981495 3593 log.go:172] (0xc000774a00) (1) Data frame handling\nI0313 14:28:02.981503 3593 log.go:172] (0xc000774a00) (1) Data frame sent\nI0313 14:28:02.981511 3593 log.go:172] (0xc00088e790) (0xc000774a00) Stream removed, broadcasting: 1\nI0313 14:28:02.981545 3593 log.go:172] (0xc00088e790) Go away received\nI0313 14:28:02.981745 3593 log.go:172] (0xc00088e790) (0xc000774a00) Stream removed, broadcasting: 1\nI0313 14:28:02.981755 3593 log.go:172] (0xc00088e790) (0xc000774000) Stream removed, broadcasting: 3\nI0313 14:28:02.981761 3593 log.go:172] (0xc00088e790) (0xc00062e1e0) Stream removed, broadcasting: 5\n" Mar 13 14:28:02.985: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 13 14:28:02.985: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 13 14:28:02.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2746 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 13 14:28:03.196: INFO: stderr: "I0313 14:28:03.091038 3613 log.go:172] (0xc000104000) (0xc0009b41e0) Create stream\nI0313 14:28:03.091082 3613 log.go:172] (0xc000104000) (0xc0009b41e0) Stream added, broadcasting: 1\nI0313 14:28:03.092788 3613 log.go:172] (0xc000104000) Reply frame received for 1\nI0313 14:28:03.092811 3613 log.go:172] (0xc000104000) (0xc0009b4280) Create stream\nI0313 14:28:03.092817 3613 log.go:172] (0xc000104000) (0xc0009b4280) Stream added, broadcasting: 3\nI0313 14:28:03.093460 3613 log.go:172] (0xc000104000) Reply frame received for 3\nI0313 14:28:03.093483 3613 log.go:172] (0xc000104000) (0xc0006ac1e0) Create stream\nI0313 14:28:03.093492 3613 log.go:172] (0xc000104000) (0xc0006ac1e0) Stream added, broadcasting: 5\nI0313 14:28:03.094326 3613 log.go:172] (0xc000104000) Reply frame received for 5\nI0313 14:28:03.164794 3613 log.go:172] (0xc000104000) Data frame received for 5\nI0313 14:28:03.164817 3613 log.go:172] (0xc0006ac1e0) (5) Data frame handling\nI0313 14:28:03.164830 3613 log.go:172] (0xc0006ac1e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0313 14:28:03.187769 3613 log.go:172] (0xc000104000) Data frame received for 5\nI0313 14:28:03.187794 3613 log.go:172] (0xc0006ac1e0) (5) Data frame handling\nI0313 14:28:03.187827 3613 log.go:172] (0xc000104000) Data frame received for 3\nI0313 14:28:03.187855 3613 log.go:172] (0xc0009b4280) (3) Data frame handling\nI0313 14:28:03.187869 3613 log.go:172] (0xc0009b4280) (3) Data frame sent\nI0313 14:28:03.187879 3613 log.go:172] (0xc000104000) Data frame received for 3\nI0313 14:28:03.187888 3613 log.go:172] (0xc0009b4280) (3) Data frame handling\nI0313 14:28:03.189354 3613 log.go:172] (0xc000104000) Data frame received for 1\nI0313 14:28:03.189367 3613 log.go:172] (0xc0009b41e0) (1) Data frame handling\nI0313 
14:28:03.189377 3613 log.go:172] (0xc0009b41e0) (1) Data frame sent\nI0313 14:28:03.189507 3613 log.go:172] (0xc000104000) (0xc0009b41e0) Stream removed, broadcasting: 1\nI0313 14:28:03.189573 3613 log.go:172] (0xc000104000) Go away received\nI0313 14:28:03.189736 3613 log.go:172] (0xc000104000) (0xc0009b41e0) Stream removed, broadcasting: 1\nI0313 14:28:03.189748 3613 log.go:172] (0xc000104000) (0xc0009b4280) Stream removed, broadcasting: 3\nI0313 14:28:03.189752 3613 log.go:172] (0xc000104000) (0xc0006ac1e0) Stream removed, broadcasting: 5\n" Mar 13 14:28:03.196: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 13 14:28:03.196: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 13 14:28:03.196: INFO: Waiting for statefulset status.replicas updated to 0 Mar 13 14:28:03.199: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 13 14:28:13.205: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 13 14:28:13.206: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 13 14:28:13.206: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 13 14:28:13.220: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999651s Mar 13 14:28:14.226: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991010832s Mar 13 14:28:15.231: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985482043s Mar 13 14:28:16.234: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980021241s Mar 13 14:28:17.239: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976820707s Mar 13 14:28:18.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97173319s Mar 13 14:28:19.247: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967408802s Mar 13 14:28:20.252: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963936836s Mar 13 14:28:21.257: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.959004347s Mar 13 14:28:22.261: INFO: Verifying statefulset ss doesn't scale past 3 for another 954.222734ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2746 Mar 13 14:28:23.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 13 14:28:23.420: INFO: stderr: "I0313 14:28:23.363504 3633 log.go:172] (0xc000992370) (0xc00055e6e0) Create stream\nI0313 14:28:23.363545 3633 log.go:172] (0xc000992370) (0xc00055e6e0) Stream added, broadcasting: 1\nI0313 14:28:23.365124 3633 log.go:172] (0xc000992370) Reply frame received for 1\nI0313 14:28:23.365151 3633 log.go:172] (0xc000992370) (0xc00055e780) Create stream\nI0313 14:28:23.365158 3633 log.go:172] (0xc000992370) (0xc00055e780) Stream added, broadcasting: 3\nI0313 14:28:23.365777 3633 log.go:172] (0xc000992370) Reply frame received for 3\nI0313 14:28:23.365801 3633 log.go:172] (0xc000992370) (0xc00080e000) Create stream\nI0313 14:28:23.365810 3633 log.go:172] (0xc000992370) (0xc00080e000) Stream added, broadcasting: 5\nI0313 14:28:23.366593 3633 log.go:172] (0xc000992370) Reply frame received for 5\nI0313 14:28:23.416000 3633 log.go:172] (0xc000992370) Data frame received for 3\nI0313 14:28:23.416033 3633
log.go:172] (0xc00055e780) (3) Data frame handling\nI0313 14:28:23.416048 3633 log.go:172] (0xc00055e780) (3) Data frame sent\nI0313 14:28:23.416058 3633 log.go:172] (0xc000992370) Data frame received for 3\nI0313 14:28:23.416067 3633 log.go:172] (0xc00055e780) (3) Data frame handling\nI0313 14:28:23.416126 3633 log.go:172] (0xc000992370) Data frame received for 5\nI0313 14:28:23.416153 3633 log.go:172] (0xc00080e000) (5) Data frame handling\nI0313 14:28:23.416168 3633 log.go:172] (0xc00080e000) (5) Data frame sent\nI0313 14:28:23.416177 3633 log.go:172] (0xc000992370) Data frame received for 5\nI0313 14:28:23.416182 3633 log.go:172] (0xc00080e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0313 14:28:23.417011 3633 log.go:172] (0xc000992370) Data frame received for 1\nI0313 14:28:23.417025 3633 log.go:172] (0xc00055e6e0) (1) Data frame handling\nI0313 14:28:23.417034 3633 log.go:172] (0xc00055e6e0) (1) Data frame sent\nI0313 14:28:23.417044 3633 log.go:172] (0xc000992370) (0xc00055e6e0) Stream removed, broadcasting: 1\nI0313 14:28:23.417109 3633 log.go:172] (0xc000992370) Go away received\nI0313 14:28:23.417471 3633 log.go:172] (0xc000992370) (0xc00055e6e0) Stream removed, broadcasting: 1\nI0313 14:28:23.417488 3633 log.go:172] (0xc000992370) (0xc00055e780) Stream removed, broadcasting: 3\nI0313 14:28:23.417495 3633 log.go:172] (0xc000992370) (0xc00080e000) Stream removed, broadcasting: 5\n" Mar 13 14:28:23.420: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 13 14:28:23.420: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 13 14:28:23.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 13 14:28:23.580: INFO: stderr: "I0313 14:28:23.523460 3653 log.go:172] (0xc000a1e370) (0xc000976640) Create stream\nI0313 14:28:23.523499 3653 log.go:172] (0xc000a1e370) (0xc000976640) Stream added, broadcasting: 1\nI0313 14:28:23.527432 3653 log.go:172] (0xc000a1e370) Reply frame received for 1\nI0313 14:28:23.527466 3653 log.go:172] (0xc000a1e370) (0xc00080e000) Create stream\nI0313 14:28:23.527476 3653 log.go:172] (0xc000a1e370) (0xc00080e000) Stream added, broadcasting: 3\nI0313 14:28:23.529719 3653 log.go:172] (0xc000a1e370) Reply frame received for 3\nI0313 14:28:23.529750 3653 log.go:172] (0xc000a1e370) (0xc0009766e0) Create stream\nI0313 14:28:23.529758 3653 log.go:172] (0xc000a1e370) (0xc0009766e0) Stream added, broadcasting: 5\nI0313 14:28:23.530527 3653 log.go:172] (0xc000a1e370) Reply frame received for 5\nI0313 14:28:23.576256 3653 log.go:172] (0xc000a1e370) Data frame received for 5\nI0313 14:28:23.576280 3653 log.go:172] (0xc0009766e0) (5) Data frame handling\nI0313 14:28:23.576288 3653 log.go:172] (0xc0009766e0) (5) Data frame sent\nI0313 14:28:23.576295 3653 log.go:172] (0xc000a1e370) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0313 14:28:23.576299 3653 log.go:172] (0xc0009766e0) (5) Data frame handling\nI0313 14:28:23.576375 3653 log.go:172] (0xc000a1e370) Data frame received for 3\nI0313 14:28:23.576405 3653 log.go:172] (0xc00080e000) (3) Data frame handling\nI0313 14:28:23.576427 3653 log.go:172] (0xc00080e000) (3) Data frame sent\nI0313 14:28:23.576440 3653 log.go:172] (0xc000a1e370) Data frame received for 3\nI0313 14:28:23.576451 3653 log.go:172] (0xc00080e000) (3) Data 
frame handling\nI0313 14:28:23.577209 3653 log.go:172] (0xc000a1e370) Data frame received for 1\nI0313 14:28:23.577221 3653 log.go:172] (0xc000976640) (1) Data frame handling\nI0313 14:28:23.577227 3653 log.go:172] (0xc000976640) (1) Data frame sent\nI0313 14:28:23.577234 3653 log.go:172] (0xc000a1e370) (0xc000976640) Stream removed, broadcasting: 1\nI0313 14:28:23.577257 3653 log.go:172] (0xc000a1e370) Go away received\nI0313 14:28:23.577497 3653 log.go:172] (0xc000a1e370) (0xc000976640) Stream removed, broadcasting: 1\nI0313 14:28:23.577510 3653 log.go:172] (0xc000a1e370) (0xc00080e000) Stream removed, broadcasting: 3\nI0313 14:28:23.577516 3653 log.go:172] (0xc000a1e370) (0xc0009766e0) Stream removed, broadcasting: 5\n" Mar 13 14:28:23.580: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 13 14:28:23.580: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 13 14:28:23.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2746 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 13 14:28:23.749: INFO: stderr: "I0313 14:28:23.683362 3673 log.go:172] (0xc000116dc0) (0xc0002d0820) Create stream\nI0313 14:28:23.683396 3673 log.go:172] (0xc000116dc0) (0xc0002d0820) Stream added, broadcasting: 1\nI0313 14:28:23.684843 3673 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0313 14:28:23.684871 3673 log.go:172] (0xc000116dc0) (0xc000912000) Create stream\nI0313 14:28:23.684880 3673 log.go:172] (0xc000116dc0) (0xc000912000) Stream added, broadcasting: 3\nI0313 14:28:23.685878 3673 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0313 14:28:23.685931 3673 log.go:172] (0xc000116dc0) (0xc00084a000) Create stream\nI0313 14:28:23.685963 3673 log.go:172] (0xc000116dc0) (0xc00084a000) Stream added, broadcasting: 5\nI0313 14:28:23.687053 3673 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0313 14:28:23.741831 3673 log.go:172] (0xc000116dc0) Data frame received for 5\nI0313 14:28:23.741863 3673 log.go:172] (0xc00084a000) (5) Data frame handling\nI0313 14:28:23.741923 3673 log.go:172] (0xc00084a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0313 14:28:23.741962 3673 log.go:172] (0xc000116dc0) Data frame received for 5\nI0313 14:28:23.741969 3673 log.go:172] (0xc00084a000) (5) Data frame handling\nI0313 14:28:23.741989 3673 log.go:172] (0xc000116dc0) Data frame received for 3\nI0313 14:28:23.741997 3673 log.go:172] (0xc000912000) (3) Data frame handling\nI0313 14:28:23.742006 3673 log.go:172] (0xc000912000) (3) Data frame sent\nI0313 14:28:23.742012 3673 log.go:172] (0xc000116dc0) Data frame received for 3\nI0313 14:28:23.742018 3673 log.go:172] (0xc000912000) (3) Data frame handling\nI0313 14:28:23.746233 3673 log.go:172] (0xc000116dc0) Data frame received for 1\nI0313 14:28:23.746263 3673 log.go:172] (0xc0002d0820) (1) Data frame handling\nI0313 14:28:23.746274 3673 log.go:172] (0xc0002d0820) (1) Data frame sent\nI0313 14:28:23.746287 3673 log.go:172] (0xc000116dc0) (0xc0002d0820) Stream removed, broadcasting: 1\nI0313 14:28:23.746302 3673 log.go:172] (0xc000116dc0) Go away received\nI0313 14:28:23.746677 3673 log.go:172] (0xc000116dc0) (0xc0002d0820) Stream removed, broadcasting: 1\nI0313 14:28:23.746688 3673 log.go:172] (0xc000116dc0) (0xc000912000) Stream removed, broadcasting: 3\nI0313 14:28:23.746693 3673 log.go:172] (0xc000116dc0) (0xc00084a000) Stream removed, 
broadcasting: 5\n" Mar 13 14:28:23.749: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 13 14:28:23.749: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 13 14:28:23.749: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 13 14:28:43.796: INFO: Deleting all statefulset in ns statefulset-2746 Mar 13 14:28:43.799: INFO: Scaling statefulset ss to 0 Mar 13 14:28:43.806: INFO: Waiting for statefulset status.replicas updated to 0 Mar 13 14:28:43.808: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:28:43.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2746" for this suite. Mar 13 14:28:49.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:28:49.912: INFO: namespace statefulset-2746 deletion completed in 6.087378048s • [SLOW TEST:89.902 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:28:49.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Mar 13 14:28:49.956: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix079322316/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:28:50.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8381" for this suite. 
Mar 13 14:28:56.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:28:56.108: INFO: namespace kubectl-8381 deletion completed in 6.072624353s • [SLOW TEST:6.195 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:28:56.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 13 14:28:56.180: INFO: PodSpec: initContainers in spec.initContainers Mar 13 14:29:40.665: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c9c71791-9224-4f81-a9fd-ec3dfc96be55", GenerateName:"", Namespace:"init-container-7012", SelfLink:"/api/v1/namespaces/init-container-7012/pods/pod-init-c9c71791-9224-4f81-a9fd-ec3dfc96be55", UID:"3a38ac88-ff97-4d57-9501-d680441ec9b5", ResourceVersion:"921670", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719706536, loc:(*time.Location)(0x7ea78c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"180069199"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-llc9h", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0013567c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-llc9h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-llc9h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-llc9h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002622c68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00180f1a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002622cf0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002622d10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002622d18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002622d1c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719706536, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719706536, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719706536, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719706536, loc:(*time.Location)(0x7ea78c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.7", PodIP:"10.244.2.193", StartTime:(*v1.Time)(0xc0013168e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025ad500)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025ad570)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://e2dead0d086a5259b68361b528d3a570dc07556a603b984cef652f5e79427f4d"}, v1.ContainerStatus{Name:"init2", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001316920), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001316900), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:29:40.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7012" for this suite. Mar 13 14:30:04.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:30:04.780: INFO: namespace init-container-7012 deletion completed in 24.066293308s • [SLOW TEST:68.672 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:30:04.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 14:30:04.880: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98403570-ccf9-418d-98f6-a7eef418ce20" in namespace "projected-9918" to be "success or failure" Mar 13 14:30:04.884: INFO: Pod "downwardapi-volume-98403570-ccf9-418d-98f6-a7eef418ce20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194328ms Mar 13 14:30:06.886: INFO: Pod "downwardapi-volume-98403570-ccf9-418d-98f6-a7eef418ce20": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006627423s STEP: Saw pod success Mar 13 14:30:06.886: INFO: Pod "downwardapi-volume-98403570-ccf9-418d-98f6-a7eef418ce20" satisfied condition "success or failure" Mar 13 14:30:06.888: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-98403570-ccf9-418d-98f6-a7eef418ce20 container client-container: STEP: delete the pod Mar 13 14:30:06.916: INFO: Waiting for pod downwardapi-volume-98403570-ccf9-418d-98f6-a7eef418ce20 to disappear Mar 13 14:30:06.920: INFO: Pod downwardapi-volume-98403570-ccf9-418d-98f6-a7eef418ce20 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:30:06.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9918" for this suite. Mar 13 14:30:12.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:30:12.977: INFO: namespace projected-9918 deletion completed in 6.054415647s • [SLOW TEST:8.197 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:30:12.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 13 14:30:13.038: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ca64939-bf0c-46e5-aa3c-698a367aa8bc" in namespace "projected-2102" to be "success or failure" Mar 13 14:30:13.058: INFO: Pod "downwardapi-volume-9ca64939-bf0c-46e5-aa3c-698a367aa8bc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.069375ms Mar 13 14:30:15.941: INFO: Pod "downwardapi-volume-9ca64939-bf0c-46e5-aa3c-698a367aa8bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.90295433s STEP: Saw pod success Mar 13 14:30:15.941: INFO: Pod "downwardapi-volume-9ca64939-bf0c-46e5-aa3c-698a367aa8bc" satisfied condition "success or failure" Mar 13 14:30:15.943: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9ca64939-bf0c-46e5-aa3c-698a367aa8bc container client-container: STEP: delete the pod Mar 13 14:30:16.036: INFO: Waiting for pod downwardapi-volume-9ca64939-bf0c-46e5-aa3c-698a367aa8bc to disappear Mar 13 14:30:16.067: INFO: Pod downwardapi-volume-9ca64939-bf0c-46e5-aa3c-698a367aa8bc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:30:16.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2102" for this suite. Mar 13 14:30:22.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:30:22.129: INFO: namespace projected-2102 deletion completed in 6.059571019s • [SLOW TEST:9.152 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:30:22.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 13 14:30:22.174: INFO: Waiting up to 5m0s for pod "downward-api-17bec704-d2e0-4628-9b7a-20789bd3d9d7" in namespace "downward-api-355" to be "success or failure" Mar 13 14:30:22.179: INFO: Pod "downward-api-17bec704-d2e0-4628-9b7a-20789bd3d9d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.765903ms Mar 13 14:30:24.182: INFO: Pod "downward-api-17bec704-d2e0-4628-9b7a-20789bd3d9d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007326042s STEP: Saw pod success Mar 13 14:30:24.182: INFO: Pod "downward-api-17bec704-d2e0-4628-9b7a-20789bd3d9d7" satisfied condition "success or failure" Mar 13 14:30:24.183: INFO: Trying to get logs from node iruya-worker pod downward-api-17bec704-d2e0-4628-9b7a-20789bd3d9d7 container dapi-container: STEP: delete the pod Mar 13 14:30:24.210: INFO: Waiting for pod downward-api-17bec704-d2e0-4628-9b7a-20789bd3d9d7 to disappear Mar 13 14:30:24.215: INFO: Pod downward-api-17bec704-d2e0-4628-9b7a-20789bd3d9d7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:30:24.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-355" for this suite. 
Mar 13 14:30:30.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:30:30.276: INFO: namespace downward-api-355 deletion completed in 6.059096267s • [SLOW TEST:8.147 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:30:30.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Mar 13 14:30:30.318: INFO: Waiting up to 5m0s for pod "client-containers-55e8945d-32a5-4ef5-bdce-b7389a616bd2" in namespace "containers-5525" to be "success or failure" Mar 13 14:30:30.336: INFO: Pod "client-containers-55e8945d-32a5-4ef5-bdce-b7389a616bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.711219ms Mar 13 14:30:32.338: INFO: Pod "client-containers-55e8945d-32a5-4ef5-bdce-b7389a616bd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019962179s STEP: Saw pod success Mar 13 14:30:32.338: INFO: Pod "client-containers-55e8945d-32a5-4ef5-bdce-b7389a616bd2" satisfied condition "success or failure" Mar 13 14:30:32.340: INFO: Trying to get logs from node iruya-worker2 pod client-containers-55e8945d-32a5-4ef5-bdce-b7389a616bd2 container test-container: STEP: delete the pod Mar 13 14:30:33.661: INFO: Waiting for pod client-containers-55e8945d-32a5-4ef5-bdce-b7389a616bd2 to disappear Mar 13 14:30:33.797: INFO: Pod client-containers-55e8945d-32a5-4ef5-bdce-b7389a616bd2 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:30:33.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5525" for this suite. 
Mar 13 14:30:41.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:30:41.866: INFO: namespace containers-5525 deletion completed in 8.066070051s • [SLOW TEST:11.590 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:30:41.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-2e4b256e-c210-4d2e-838d-84083bec3863 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-2e4b256e-c210-4d2e-838d-84083bec3863 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:30:45.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1606" for this suite. Mar 13 14:31:08.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:31:08.074: INFO: namespace configmap-1606 deletion completed in 22.087893322s • [SLOW TEST:26.208 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:31:08.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 13 14:31:08.107: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 13 14:31:08.141: INFO: Waiting for terminating namespaces to be deleted... 
Mar 13 14:31:08.142: INFO: Logging pods the kubelet thinks are on node iruya-worker before test Mar 13 14:31:08.145: INFO: kindnet-9jdkr from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container status recorded) Mar 13 14:31:08.145: INFO: Container kindnet-cni ready: true, restart count 0 Mar 13 14:31:08.145: INFO: kube-proxy-nf96r from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container status recorded) Mar 13 14:31:08.145: INFO: Container kube-proxy ready: true, restart count 0 Mar 13 14:31:08.145: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test Mar 13 14:31:08.147: INFO: kindnet-d7zdc from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container status recorded) Mar 13 14:31:08.147: INFO: Container kindnet-cni ready: true, restart count 0 Mar 13 14:31:08.147: INFO: kube-proxy-clpmt from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container status recorded) Mar 13 14:31:08.147: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ac2963c5-ed4a-47e8-97c5-ac5b3a950fc0 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-ac2963c5-ed4a-47e8-97c5-ac5b3a950fc0 off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-ac2963c5-ed4a-47e8-97c5-ac5b3a950fc0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:31:14.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9418" for this suite. 
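Editor's sketch of the node label add/remove steps above, done directly with client-go. This assumes a recent client-go (whose Patch methods take a context); the kubeconfig path, node name, and label key are placeholders, not the values the test generated.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	node := "iruya-worker" // placeholder node name

	// Add (or update) a label with a strategic-merge patch.
	add := []byte(`{"metadata":{"labels":{"example.com/e2e-test":"42"}}}`)
	if _, err := clientset.CoreV1().Nodes().Patch(context.TODO(), node,
		types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Remove the label by patching its value to null.
	del := []byte(`{"metadata":{"labels":{"example.com/e2e-test":null}}}`)
	if _, err := clientset.CoreV1().Nodes().Patch(context.TODO(), node,
		types.StrategicMergePatchType, del, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
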
Mar 13 14:31:26.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:31:26.381: INFO: namespace sched-pred-9418 deletion completed in 12.066276512s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:18.307 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 13 14:31:26.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2d31b2dd-6bc6-4cf8-a33b-758c0bb2d603 STEP: Creating a pod to test consume secrets Mar 13 14:31:26.457: INFO: Waiting up to 5m0s for pod "pod-secrets-b997f975-5cd4-4268-9d5b-fa1a2e1ecb5d" in namespace "secrets-426" to be "success or failure" Mar 13 14:31:26.493: INFO: Pod "pod-secrets-b997f975-5cd4-4268-9d5b-fa1a2e1ecb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 36.079216ms Mar 13 14:31:28.495: INFO: Pod "pod-secrets-b997f975-5cd4-4268-9d5b-fa1a2e1ecb5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.037989663s STEP: Saw pod success Mar 13 14:31:28.495: INFO: Pod "pod-secrets-b997f975-5cd4-4268-9d5b-fa1a2e1ecb5d" satisfied condition "success or failure" Mar 13 14:31:28.496: INFO: Trying to get logs from node iruya-worker pod pod-secrets-b997f975-5cd4-4268-9d5b-fa1a2e1ecb5d container secret-volume-test: STEP: delete the pod Mar 13 14:31:28.519: INFO: Waiting for pod pod-secrets-b997f975-5cd4-4268-9d5b-fa1a2e1ecb5d to disappear Mar 13 14:31:28.526: INFO: Pod pod-secrets-b997f975-5cd4-4268-9d5b-fa1a2e1ecb5d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 13 14:31:28.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-426" for this suite. 
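For reference, the knobs the secrets test above exercises live in two places in the pod spec: the secret volume's defaultMode (file permission bits on the mounted keys) and the pod-level securityContext (non-root UID plus fsGroup ownership of the volume). A sketch with made-up names and values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400)    // owner-read-only files in the volume
	uid := int64(1000)     // run as a non-root user
	fsGroup := int64(1001) // group ownership applied to the volume
	nonRoot := true

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-example"}, // hypothetical
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:    &uid,
				RunAsNonRoot: &nonRoot,
				FSGroup:      &fsGroup,
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "my-secret", // hypothetical secret
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
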
Mar 13 14:31:34.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 13 14:31:34.580: INFO: namespace secrets-426 deletion completed in 6.052243134s • [SLOW TEST:8.199 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMar 13 14:31:34.581: INFO: Running AfterSuite actions on all nodes Mar 13 14:31:34.581: INFO: Running AfterSuite actions on node 1 Mar 13 14:31:34.581: INFO: Skipping dumping logs from cluster Ran 215 of 4412 Specs in 5769.244 seconds SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped PASS