I0524 10:50:01.624478 6 e2e.go:224] Starting e2e run "5142e99c-9dac-11ea-9618-0242ac110016" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590317401 - Will randomize all specs
Will run 201 of 2164 specs
May 24 10:50:01.811: INFO: >>> kubeConfig: /root/.kube/config
May 24 10:50:01.816: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 24 10:50:01.834: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 24 10:50:01.864: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 24 10:50:01.864: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 24 10:50:01.864: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 24 10:50:01.877: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 24 10:50:01.877: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 24 10:50:01.877: INFO: e2e test version: v1.13.12
May 24 10:50:01.878: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 24 10:50:01.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
May 24 10:50:02.120: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 10:50:02.122: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 24 10:50:02.167: INFO: Pod name sample-pod: Found 0 pods out of 1 May 24 10:50:07.344: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 24 10:50:09.772: INFO: Creating deployment "test-rolling-update-deployment" May 24 10:50:09.934: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 24 10:50:09.976: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 24 10:50:12.058: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 24 10:50:12.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 10:50:14.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 10:50:16.633: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 10:50:18.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725914210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 10:50:20.192: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 24 10:50:20.646: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-2tl9z,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2tl9z/deployments/test-rolling-update-deployment,UID:5666d8bf-9dac-11ea-99e8-0242ac110002,ResourceVersion:12254204,Generation:1,CreationTimestamp:2020-05-24 10:50:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-24 10:50:10 +0000 UTC 2020-05-24 10:50:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-24 10:50:19 +0000 UTC 2020-05-24 10:50:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 24 10:50:20.650: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-2tl9z,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2tl9z/replicasets/test-rolling-update-deployment-75db98fb4c,UID:56865f7a-9dac-11ea-99e8-0242ac110002,ResourceVersion:12254194,Generation:1,CreationTimestamp:2020-05-24 10:50:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 5666d8bf-9dac-11ea-99e8-0242ac110002 0xc001796e07 0xc001796e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 24 10:50:20.650: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 24 10:50:20.650: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-2tl9z,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2tl9z/replicasets/test-rolling-update-controller,UID:51d7825e-9dac-11ea-99e8-0242ac110002,ResourceVersion:12254202,Generation:2,CreationTimestamp:2020-05-24 10:50:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 5666d8bf-9dac-11ea-99e8-0242ac110002 0xc001796d47 0xc001796d48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 24 10:50:20.653: INFO: Pod "test-rolling-update-deployment-75db98fb4c-rzrr9" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-rzrr9,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-2tl9z,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2tl9z/pods/test-rolling-update-deployment-75db98fb4c-rzrr9,UID:5689b389-9dac-11ea-99e8-0242ac110002,ResourceVersion:12254193,Generation:0,CreationTimestamp:2020-05-24 10:50:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 56865f7a-9dac-11ea-99e8-0242ac110002 0xc000faaec7 0xc000faaec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fkxzd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fkxzd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-fkxzd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000faaf40} {node.kubernetes.io/unreachable Exists NoExecute 0xc000faaf60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:50:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:50:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:50:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:50:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.229,StartTime:2020-05-24 10:50:10 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-24 10:50:18 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://e5f477f4e3965455822cda4fcd2b57dda285b6aa50f18f4d5411112a2930a4c6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:50:20.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-2tl9z" for this suite. 
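The spec above drives the API directly rather than applying a manifest, but the Deployment dump in its teardown pins down what was created: one replica, the name: sample-pod selector, the redis test image, and the default 25%/25% RollingUpdate strategy. Expressed as YAML, that object corresponds roughly to the sketch below (field values are taken from the dump; the exact spec the test submits may differ in minor details).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  labels:
    name: sample-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0

Because its selector also matches the pre-existing test-rolling-update-controller ReplicaSet (the nginx one listed under "All old ReplicaSets"), the Deployment adopts it, scales it to zero, and replaces its pod with one from the new ReplicaSet, which is what this spec asserts.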
May 24 10:50:30.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:50:31.003: INFO: namespace: e2e-tests-deployment-2tl9z, resource: bindings, ignored listing per whitelist May 24 10:50:31.017: INFO: namespace e2e-tests-deployment-2tl9z deletion completed in 10.361892385s • [SLOW TEST:29.139 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:50:31.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-ndfkk May 24 10:50:35.916: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-ndfkk STEP: checking the pod's current state and verifying that restartCount is present May 24 10:50:35.919: INFO: Initial restart count of pod liveness-http is 0 May 24 10:51:02.298: INFO: Restart count of pod e2e-tests-container-probe-ndfkk/liveness-http is now 1 (26.379302482s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:51:02.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-ndfkk" for this suite. 
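The liveness-http pod above is restarted because its HTTP liveness probe starts failing; the kubelet then kills and recreates the container, which is the restart-count change the spec checks for (0 to 1 after roughly 26 seconds). A pod with a probe of that shape looks like the sketch below; the image, port and timings are illustrative rather than copied from the test, which uses a purpose-built test server that eventually starts returning errors on /healthz.

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.0   # assumed image; any server that eventually fails /healthz works
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1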
May 24 10:51:08.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:51:08.435: INFO: namespace: e2e-tests-container-probe-ndfkk, resource: bindings, ignored listing per whitelist May 24 10:51:08.462: INFO: namespace e2e-tests-container-probe-ndfkk deletion completed in 6.110220894s • [SLOW TEST:37.444 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:51:08.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod May 24 10:51:08.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5fwj5' May 24 10:51:11.403: INFO: stderr: "" May 24 10:51:11.403: INFO: stdout: "pod/pause created\n" May 24 10:51:11.403: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 24 10:51:11.403: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-5fwj5" to be "running and ready" May 24 10:51:11.413: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.472953ms May 24 10:51:13.416: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01288917s May 24 10:51:15.421: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017283155s May 24 10:51:17.492: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088564632s May 24 10:51:19.496: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092583038s May 24 10:51:21.756: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.35246887s May 24 10:51:23.773: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.370092439s May 24 10:51:25.777: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 14.373426407s May 24 10:51:25.777: INFO: Pod "pause" satisfied condition "running and ready" May 24 10:51:25.777: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod May 24 10:51:25.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-5fwj5' May 24 10:51:25.867: INFO: stderr: "" May 24 10:51:25.867: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 24 10:51:25.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-5fwj5' May 24 10:51:25.949: INFO: stderr: "" May 24 10:51:25.949: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 14s testing-label-value\n" STEP: removing the label testing-label of a pod May 24 10:51:25.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-5fwj5' May 24 10:51:26.042: INFO: stderr: "" May 24 10:51:26.042: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 24 10:51:26.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-5fwj5' May 24 10:51:26.129: INFO: stderr: "" May 24 10:51:26.129: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 15s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources May 24 10:51:26.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-5fwj5' May 24 10:51:26.220: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 10:51:26.220: INFO: stdout: "pod \"pause\" force deleted\n" May 24 10:51:26.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-5fwj5' May 24 10:51:26.319: INFO: stderr: "No resources found.\n" May 24 10:51:26.319: INFO: stdout: "" May 24 10:51:26.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-5fwj5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 24 10:51:26.524: INFO: stderr: "" May 24 10:51:26.524: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:51:26.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5fwj5" for this suite. 
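The manifest piped to kubectl create -f - above is not echoed into the log; a minimal pod consistent with what the spec later queries (a pod named pause carrying the name=pause label it selects on with -l name=pause) would look like the sketch below, with the image an assumption rather than the test's actual choice.

apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed; any long-running container is enough for the label checks

The label round-trip itself is visible verbatim above: kubectl label pods pause testing-label=testing-label-value adds the label, the trailing-dash form testing-label- removes it, and the two get pod -L testing-label calls confirm each state.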
May 24 10:51:33.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:51:33.553: INFO: namespace: e2e-tests-kubectl-5fwj5, resource: bindings, ignored listing per whitelist May 24 10:51:33.556: INFO: namespace e2e-tests-kubectl-5fwj5 deletion completed in 7.027406708s • [SLOW TEST:25.094 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:51:33.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 10:51:33.678: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016" in namespace "e2e-tests-downward-api-wltmm" to be "success or failure" May 24 10:51:33.687: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 8.675354ms May 24 10:51:35.690: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011182064s May 24 10:51:39.686: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 6.00741213s May 24 10:51:41.689: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01052431s May 24 10:51:44.121: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 10.442176451s May 24 10:51:46.124: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 12.446101185s May 24 10:51:48.128: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 14.44950337s May 24 10:51:50.842: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 17.163467254s May 24 10:51:53.170: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 19.491615341s May 24 10:51:55.174: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.495525834s May 24 10:51:57.177: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 23.499022275s May 24 10:52:02.002: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 28.323532679s May 24 10:52:04.432: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 30.75389609s May 24 10:52:06.436: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 32.757539549s May 24 10:52:10.359: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 36.680349818s May 24 10:52:12.364: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 38.685267605s May 24 10:52:14.407: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 40.728477776s May 24 10:52:16.577: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 42.898963177s May 24 10:52:18.581: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 44.902555556s May 24 10:52:20.584: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 46.905326427s May 24 10:52:22.587: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 48.909077727s STEP: Saw pod success May 24 10:52:22.587: INFO: Pod "downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 10:52:22.590: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 10:52:24.040: INFO: Waiting for pod downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016 to disappear May 24 10:52:24.235: INFO: Pod downwardapi-volume-8865e107-9dac-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:52:24.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wltmm" for this suite. 
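The "podname only" case exercises the downward API volume plugin: the pod's own metadata.name is projected into a file inside the container, and the client-container simply prints that file so the framework can compare it against the expected name (the log-fetch step above). A sketch of such a pod follows; the mount path, file name and image are illustrative choices, not the test's exact spec.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-podname-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29                    # illustrative; the suite uses its own mount-test image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name

"Success or failure" in the wait loop above means the pod is expected to run to completion (phase Succeeded) with the right content in its logs.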
May 24 10:52:32.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:52:32.607: INFO: namespace: e2e-tests-downward-api-wltmm, resource: bindings, ignored listing per whitelist May 24 10:52:32.644: INFO: namespace e2e-tests-downward-api-wltmm deletion completed in 8.406979334s • [SLOW TEST:59.088 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:52:32.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 24 10:52:33.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 24 10:52:33.300: INFO: stderr: "" May 24 10:52:33.300: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:52:33.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-v6dd9" for this suite. 
May 24 10:52:39.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:52:39.384: INFO: namespace: e2e-tests-kubectl-v6dd9, resource: bindings, ignored listing per whitelist May 24 10:52:39.389: INFO: namespace e2e-tests-kubectl-v6dd9 deletion completed in 6.085598765s • [SLOW TEST:6.744 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:52:39.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:52:39.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-t2rm8" for this suite. 
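This spec creates nothing of its own; it essentially verifies that the built-in kubernetes Service in the default namespace is present and exposes the API server over HTTPS on port 443. On a conformant cluster that Service looks roughly like the sketch below; the ClusterIP and the target port are cluster-specific, and the labels shown are the conventional ones rather than anything the log confirms.

apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: default
  labels:
    component: apiserver
    provider: kubernetes
spec:
  type: ClusterIP
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443   # whatever secure port this cluster's API server listens on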
May 24 10:52:45.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:52:45.573: INFO: namespace: e2e-tests-services-t2rm8, resource: bindings, ignored listing per whitelist May 24 10:52:45.608: INFO: namespace e2e-tests-services-t2rm8 deletion completed in 6.070572442s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.218 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:52:45.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 10:52:45.750: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b35e43c6-9dac-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-tx952" to be "success or failure" May 24 10:52:45.753: INFO: Pod "downwardapi-volume-b35e43c6-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.483896ms May 24 10:52:47.755: INFO: Pod "downwardapi-volume-b35e43c6-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004968675s May 24 10:52:49.759: INFO: Pod "downwardapi-volume-b35e43c6-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008227372s May 24 10:52:51.762: INFO: Pod "downwardapi-volume-b35e43c6-9dac-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011648011s STEP: Saw pod success May 24 10:52:51.762: INFO: Pod "downwardapi-volume-b35e43c6-9dac-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 10:52:51.765: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b35e43c6-9dac-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 10:52:51.814: INFO: Waiting for pod downwardapi-volume-b35e43c6-9dac-11ea-9618-0242ac110016 to disappear May 24 10:52:51.820: INFO: Pod downwardapi-volume-b35e43c6-9dac-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:52:51.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tx952" for this suite. 
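Here the projected volume's downward API source asks for the container's limits.memory, but the container deliberately sets no memory limit, so the kubelet falls back to the node's allocatable memory and that value is what lands in the projected file, which the spec then reads back from the container's logs. A sketch of the pattern, with file name and image chosen for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-memory-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29                    # illustrative
    command: ["cat", "/etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory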
May 24 10:52:57.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:52:57.884: INFO: namespace: e2e-tests-projected-tx952, resource: bindings, ignored listing per whitelist May 24 10:52:57.886: INFO: namespace e2e-tests-projected-tx952 deletion completed in 6.063112165s • [SLOW TEST:12.278 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:52:57.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-baad93a1-9dac-11ea-9618-0242ac110016 STEP: Creating secret with name secret-projected-all-test-volume-baad937c-9dac-11ea-9618-0242ac110016 STEP: Creating a pod to test Check all projections for projected volume plugin May 24 10:52:58.039: INFO: Waiting up to 5m0s for pod "projected-volume-baad9339-9dac-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-9s7vq" to be "success or failure" May 24 10:52:58.073: INFO: Pod "projected-volume-baad9339-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 34.21247ms May 24 10:53:00.077: INFO: Pod "projected-volume-baad9339-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038096171s May 24 10:53:02.104: INFO: Pod "projected-volume-baad9339-9dac-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 4.065144383s May 24 10:53:04.107: INFO: Pod "projected-volume-baad9339-9dac-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068234337s STEP: Saw pod success May 24 10:53:04.107: INFO: Pod "projected-volume-baad9339-9dac-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 10:53:04.110: INFO: Trying to get logs from node hunter-worker pod projected-volume-baad9339-9dac-11ea-9618-0242ac110016 container projected-all-volume-test: STEP: delete the pod May 24 10:53:04.213: INFO: Waiting for pod projected-volume-baad9339-9dac-11ea-9618-0242ac110016 to disappear May 24 10:53:04.229: INFO: Pod projected-volume-baad9339-9dac-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:53:04.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9s7vq" for this suite. 
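The "all components" case mounts a single projected volume that draws from three sources at once: a ConfigMap, a Secret, and the downward API. The ConfigMap and Secret it creates are the generated-name objects logged above; the sketch below shows the shape of such a volume with simplified names, keys and paths (all of which are assumptions, since the log does not show the keys).

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-all-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox:1.29                    # illustrative
    command: ["sh", "-c", "cat /all/podname /all/secret-data /all/configmap-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - secret:
          name: secret-projected-all-test-volume      # the spec appends a generated suffix
          items:
          - key: data-1
            path: secret-data
      - configMap:
          name: configmap-projected-all-test-volume   # likewise
          items:
          - key: data-1
            path: configmap-data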
May 24 10:53:10.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:53:10.284: INFO: namespace: e2e-tests-projected-9s7vq, resource: bindings, ignored listing per whitelist May 24 10:53:10.302: INFO: namespace e2e-tests-projected-9s7vq deletion completed in 6.070012903s • [SLOW TEST:12.416 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:53:10.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 24 10:53:48.489: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 10:53:48.496: INFO: Pod pod-with-poststart-http-hook still exists May 24 10:53:50.496: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 10:53:50.500: INFO: Pod pod-with-poststart-http-hook still exists May 24 10:53:52.496: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 10:53:52.499: INFO: Pod pod-with-poststart-http-hook still exists May 24 10:53:54.496: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 10:53:54.501: INFO: Pod pod-with-poststart-http-hook still exists May 24 10:53:56.496: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 10:53:56.499: INFO: Pod pod-with-poststart-http-hook still exists May 24 10:53:58.496: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 10:53:58.626: INFO: Pod pod-with-poststart-http-hook still exists May 24 10:54:00.496: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 10:54:00.545: INFO: Pod pod-with-poststart-http-hook still exists May 24 10:54:02.496: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 10:54:02.518: INFO: Pod pod-with-poststart-http-hook still exists May 24 10:54:04.496: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 10:54:04.500: INFO: Pod pod-with-poststart-http-hook still exists May 24 10:54:06.496: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 10:54:06.499: INFO: Pod pod-with-poststart-http-hook still exists May 24 10:54:08.496: INFO: Waiting for pod 
pod-with-poststart-http-hook to disappear May 24 10:54:08.500: INFO: Pod pod-with-poststart-http-hook still exists May 24 10:54:10.496: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 10:54:10.500: INFO: Pod pod-with-poststart-http-hook still exists May 24 10:54:12.496: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 10:54:12.500: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:54:12.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-24qz4" for this suite. May 24 10:54:34.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:54:34.559: INFO: namespace: e2e-tests-container-lifecycle-hook-24qz4, resource: bindings, ignored listing per whitelist May 24 10:54:34.583: INFO: namespace e2e-tests-container-lifecycle-hook-24qz4 deletion completed in 22.079462683s • [SLOW TEST:84.280 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:54:34.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-f44c2793-9dac-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 10:54:34.719: INFO: Waiting up to 5m0s for pod "pod-configmaps-f451ceb4-9dac-11ea-9618-0242ac110016" in namespace "e2e-tests-configmap-h5shx" to be "success or failure" May 24 10:54:34.774: INFO: Pod "pod-configmaps-f451ceb4-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 54.734381ms May 24 10:54:36.778: INFO: Pod "pod-configmaps-f451ceb4-9dac-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058974041s May 24 10:54:38.781: INFO: Pod "pod-configmaps-f451ceb4-9dac-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 4.061150377s May 24 10:54:40.784: INFO: Pod "pod-configmaps-f451ceb4-9dac-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.064466247s STEP: Saw pod success May 24 10:54:40.784: INFO: Pod "pod-configmaps-f451ceb4-9dac-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 10:54:40.787: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-f451ceb4-9dac-11ea-9618-0242ac110016 container configmap-volume-test: STEP: delete the pod May 24 10:54:40.823: INFO: Waiting for pod pod-configmaps-f451ceb4-9dac-11ea-9618-0242ac110016 to disappear May 24 10:54:40.832: INFO: Pod pod-configmaps-f451ceb4-9dac-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:54:40.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-h5shx" for this suite. May 24 10:54:46.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:54:47.002: INFO: namespace: e2e-tests-configmap-h5shx, resource: bindings, ignored listing per whitelist May 24 10:54:47.007: INFO: namespace e2e-tests-configmap-h5shx deletion completed in 6.171798119s • [SLOW TEST:12.424 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:54:47.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:54:53.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-4n42g" for this suite. 
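The read-only busybox case relies on the container-level securityContext: with readOnlyRootFilesystem set, any attempt to write into the container's root filesystem fails, which is what the spec verifies. A sketch of such a pod, with the command and image tag chosen for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-example
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29   # illustrative tag
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true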
May 24 10:55:33.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:55:33.203: INFO: namespace: e2e-tests-kubelet-test-4n42g, resource: bindings, ignored listing per whitelist May 24 10:55:33.261: INFO: namespace e2e-tests-kubelet-test-4n42g deletion completed in 40.101330348s • [SLOW TEST:46.255 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:55:33.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-174a7528-9dad-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 10:55:33.421: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-174c6745-9dad-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-j5qz8" to be "success or failure" May 24 10:55:33.443: INFO: Pod "pod-projected-secrets-174c6745-9dad-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 21.737234ms May 24 10:55:35.447: INFO: Pod "pod-projected-secrets-174c6745-9dad-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025953886s May 24 10:55:37.451: INFO: Pod "pod-projected-secrets-174c6745-9dad-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029960391s STEP: Saw pod success May 24 10:55:37.451: INFO: Pod "pod-projected-secrets-174c6745-9dad-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 10:55:37.454: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-174c6745-9dad-11ea-9618-0242ac110016 container projected-secret-volume-test: STEP: delete the pod May 24 10:55:37.473: INFO: Waiting for pod pod-projected-secrets-174c6745-9dad-11ea-9618-0242ac110016 to disappear May 24 10:55:37.478: INFO: Pod pod-projected-secrets-174c6745-9dad-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:55:37.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j5qz8" for this suite. 
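The "with mappings" variant of the projected secret test renames the secret's key on the way into the volume: an items entry maps the stored key to a different file path, and the test container prints that file. The sketch below pairs an example Secret with such a pod; the key, value and paths are assumptions, since the log records only the generated object names.

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-map        # the spec appends a generated suffix
data:
  data-1: dmFsdWUtMQ==                   # "value-1", base64-encoded (assumed key/value)
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29                  # illustrative
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1
            path: new-path-data-1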
May 24 10:55:43.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 24 10:55:43.552: INFO: namespace: e2e-tests-projected-j5qz8, resource: bindings, ignored listing per whitelist
May 24 10:55:43.666: INFO: namespace e2e-tests-projected-j5qz8 deletion completed in 6.185336167s
• [SLOW TEST:10.404 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 24 10:55:43.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 24 10:55:43.798: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1d7dec92-9dad-11ea-9618-0242ac110016" in namespace "e2e-tests-downward-api-58fjs" to be "success or failure"
May 24 10:55:43.826: INFO: Pod "downwardapi-volume-1d7dec92-9dad-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 28.384508ms
May 24 10:55:45.831: INFO: Pod "downwardapi-volume-1d7dec92-9dad-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033275386s
May 24 10:55:47.836: INFO: Pod "downwardapi-volume-1d7dec92-9dad-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037947829s
STEP: Saw pod success
May 24 10:55:47.836: INFO: Pod "downwardapi-volume-1d7dec92-9dad-11ea-9618-0242ac110016" satisfied condition "success or failure"
May 24 10:55:47.838: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1d7dec92-9dad-11ea-9618-0242ac110016 container client-container:
STEP: delete the pod
May 24 10:55:47.854: INFO: Waiting for pod downwardapi-volume-1d7dec92-9dad-11ea-9618-0242ac110016 to disappear
May 24 10:55:47.858: INFO: Pod downwardapi-volume-1d7dec92-9dad-11ea-9618-0242ac110016 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 24 10:55:47.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-58fjs" for this suite.
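
The downwardapi-volume pod above reads its own cpu limit from a downward API volume; the manifest itself is not in the log. A minimal sketch of that mechanism, with hypothetical names and a hypothetical limit, is below: resourceFieldRef exposes limits.cpu of the named container as a file, and divisor controls the unit it is reported in.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m                # report the limit in millicores, e.g. "500"
EOF
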
May 24 10:55:53.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 24 10:55:53.907: INFO: namespace: e2e-tests-downward-api-58fjs, resource: bindings, ignored listing per whitelist
May 24 10:55:53.952: INFO: namespace e2e-tests-downward-api-58fjs deletion completed in 6.090126257s
• [SLOW TEST:10.286 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 24 10:55:53.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 24 10:55:54.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-qxgzk'
May 24 10:55:54.157: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 24 10:55:54.157: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
May 24 10:55:54.166: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
May 24 10:55:54.187: INFO: scanned /root for discovery docs:
May 24 10:55:54.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-qxgzk'
May 24 10:56:10.064: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 24 10:56:10.064: INFO: stdout: "Created e2e-test-nginx-rc-73077ce3a894cfed384598196855d019\nScaling up e2e-test-nginx-rc-73077ce3a894cfed384598196855d019 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-73077ce3a894cfed384598196855d019 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-73077ce3a894cfed384598196855d019 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
May 24 10:56:10.064: INFO: stdout: "Created e2e-test-nginx-rc-73077ce3a894cfed384598196855d019\nScaling up e2e-test-nginx-rc-73077ce3a894cfed384598196855d019 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-73077ce3a894cfed384598196855d019 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-73077ce3a894cfed384598196855d019 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
May 24 10:56:10.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-qxgzk'
May 24 10:56:10.165: INFO: stderr: ""
May 24 10:56:10.165: INFO: stdout: "e2e-test-nginx-rc-73077ce3a894cfed384598196855d019-rrm4l e2e-test-nginx-rc-xxrvq "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
May 24 10:56:15.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-qxgzk'
May 24 10:56:15.271: INFO: stderr: ""
May 24 10:56:15.271: INFO: stdout: "e2e-test-nginx-rc-73077ce3a894cfed384598196855d019-rrm4l "
May 24 10:56:15.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-73077ce3a894cfed384598196855d019-rrm4l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qxgzk'
May 24 10:56:15.371: INFO: stderr: ""
May 24 10:56:15.371: INFO: stdout: "true"
May 24 10:56:15.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-73077ce3a894cfed384598196855d019-rrm4l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qxgzk'
May 24 10:56:15.495: INFO: stderr: ""
May 24 10:56:15.495: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
May 24 10:56:15.495: INFO: e2e-test-nginx-rc-73077ce3a894cfed384598196855d019-rrm4l is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
May 24 10:56:15.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-qxgzk'
May 24 10:56:15.606: INFO: stderr: ""
May 24 10:56:15.606: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 24 10:56:15.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qxgzk" for this suite.
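
As the stderr captured above notes, both kubectl run --generator=run/v1 and kubectl rolling-update were already deprecated in this release, and both have since been removed from newer kubectl versions. On clusters where they are no longer available, a roughly equivalent flow uses a Deployment instead of a bare ReplicationController. The commands below are an illustrative sketch with a hypothetical deployment name, not what this test executed.

kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
# Re-roll the pods without changing the image (kubectl 1.15+), the closest
# analogue to "rolling-update to the same image":
kubectl rollout restart deployment/e2e-test-nginx
kubectl rollout status deployment/e2e-test-nginx
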
May 24 10:56:37.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 24 10:56:37.660: INFO: namespace: e2e-tests-kubectl-qxgzk, resource: bindings, ignored listing per whitelist
May 24 10:56:37.723: INFO: namespace e2e-tests-kubectl-qxgzk deletion completed in 22.113336811s
• [SLOW TEST:43.771 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 24 10:56:37.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 24 10:56:37.913: INFO: Creating deployment "nginx-deployment"
May 24 10:56:37.917: INFO: Waiting for observed generation 1
May 24 10:56:40.141: INFO: Waiting for all required pods to come up
May 24 10:56:40.146: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
May 24 10:56:50.800: INFO: Waiting for deployment "nginx-deployment" to complete
May 24 10:56:50.807: INFO: Updating deployment "nginx-deployment" with a non-existent image
May 24 10:56:50.814: INFO: Updating deployment nginx-deployment
May 24 10:56:50.814: INFO: Waiting for observed generation 2
May 24 10:56:53.036: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 24 10:56:53.039: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 24 10:56:53.041: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 24 10:56:53.048: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 24 10:56:53.048: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 24 10:56:53.050: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 24 10:56:53.053: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
May 24 10:56:53.053: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
May 24 10:56:53.059: INFO: Updating deployment nginx-deployment
May 24 10:56:53.059: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
May 24 10:56:53.131: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 24 10:56:53.327: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps]
Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 24 10:56:54.174: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-srrrb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-srrrb/deployments/nginx-deployment,UID:3dc07a63-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255529,Generation:3,CreationTimestamp:2020-05-24 10:56:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-24 10:56:51 +0000 UTC 2020-05-24 10:56:37 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-05-24 10:56:53 +0000 UTC 2020-05-24 10:56:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 24 10:56:54.231: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-srrrb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-srrrb/replicasets/nginx-deployment-5c98f8fb5,UID:4571c3db-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255567,Generation:3,CreationTimestamp:2020-05-24 10:56:50 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 3dc07a63-9dad-11ea-99e8-0242ac110002 0xc001dd6f57 0xc001dd6f58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 24 10:56:54.231: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 24 10:56:54.231: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-srrrb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-srrrb/replicasets/nginx-deployment-85ddf47c5d,UID:3dc32ef4-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255564,Generation:3,CreationTimestamp:2020-05-24 10:56:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 3dc07a63-9dad-11ea-99e8-0242ac110002 0xc001dd7017 0xc001dd7018}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 24 10:56:54.342: INFO: Pod "nginx-deployment-5c98f8fb5-4v854" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4v854,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-5c98f8fb5-4v854,UID:45749714-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255482,Generation:0,CreationTimestamp:2020-05-24 10:56:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4571c3db-9dad-11ea-99e8-0242ac110002 0xc002024477 0xc002024478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020244f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002024510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-24 10:56:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.342: INFO: Pod "nginx-deployment-5c98f8fb5-5lfkp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5lfkp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-5c98f8fb5-5lfkp,UID:47358da1-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255558,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4571c3db-9dad-11ea-99e8-0242ac110002 0xc0020245d7 0xc0020245d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002024650} {node.kubernetes.io/unreachable Exists NoExecute 0xc002024670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.343: INFO: Pod "nginx-deployment-5c98f8fb5-6v9hs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6v9hs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-5c98f8fb5-6v9hs,UID:473589a5-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255556,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4571c3db-9dad-11ea-99e8-0242ac110002 0xc0020246e7 0xc0020246e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002024760} {node.kubernetes.io/unreachable Exists NoExecute 0xc002024780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.343: INFO: Pod "nginx-deployment-5c98f8fb5-8rnsg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8rnsg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-5c98f8fb5-8rnsg,UID:46f12acd-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255548,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4571c3db-9dad-11ea-99e8-0242ac110002 0xc0020247f7 0xc0020247f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002024870} {node.kubernetes.io/unreachable Exists NoExecute 0xc002024890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.343: INFO: Pod "nginx-deployment-5c98f8fb5-b4wbw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b4wbw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-5c98f8fb5-b4wbw,UID:46d28149-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255570,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4571c3db-9dad-11ea-99e8-0242ac110002 0xc002024907 0xc002024908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002024980} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020249a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-24 10:56:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.343: INFO: Pod "nginx-deployment-5c98f8fb5-cds82" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cds82,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-5c98f8fb5-cds82,UID:475b0c6c-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255565,Generation:0,CreationTimestamp:2020-05-24 10:56:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4571c3db-9dad-11ea-99e8-0242ac110002 0xc002024a67 0xc002024a68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002024ae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002024b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.343: INFO: Pod "nginx-deployment-5c98f8fb5-gvnsv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gvnsv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-5c98f8fb5-gvnsv,UID:45773790-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255493,Generation:0,CreationTimestamp:2020-05-24 10:56:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4571c3db-9dad-11ea-99e8-0242ac110002 0xc002024b77 0xc002024b78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002024bf0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002024c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-24 10:56:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.343: INFO: Pod "nginx-deployment-5c98f8fb5-pmvnr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pmvnr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-5c98f8fb5-pmvnr,UID:45772aad-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255476,Generation:0,CreationTimestamp:2020-05-24 10:56:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4571c3db-9dad-11ea-99e8-0242ac110002 0xc002024cd7 0xc002024cd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002024d50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002024d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-24 10:56:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.344: INFO: Pod "nginx-deployment-5c98f8fb5-qcktk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qcktk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-5c98f8fb5-qcktk,UID:46f13b1a-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255547,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4571c3db-9dad-11ea-99e8-0242ac110002 0xc002024e37 0xc002024e38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002024eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002024ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.344: INFO: Pod "nginx-deployment-5c98f8fb5-rvqsf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rvqsf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-5c98f8fb5-rvqsf,UID:473588c8-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255561,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4571c3db-9dad-11ea-99e8-0242ac110002 0xc002024f47 0xc002024f48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002024fc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002024fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.344: INFO: Pod "nginx-deployment-5c98f8fb5-srdbw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-srdbw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-5c98f8fb5-srdbw,UID:4599f852-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255497,Generation:0,CreationTimestamp:2020-05-24 10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4571c3db-9dad-11ea-99e8-0242ac110002 0xc002025057 0xc002025058}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020250d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020250f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-24 10:56:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.344: INFO: Pod "nginx-deployment-5c98f8fb5-vsnbp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vsnbp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-5c98f8fb5-vsnbp,UID:473561fb-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255551,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4571c3db-9dad-11ea-99e8-0242ac110002 0xc0020251b7 0xc0020251b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002025230} {node.kubernetes.io/unreachable Exists NoExecute 0xc002025250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.345: INFO: Pod "nginx-deployment-5c98f8fb5-wwfmr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wwfmr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-5c98f8fb5-wwfmr,UID:459697a7-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255494,Generation:0,CreationTimestamp:2020-05-24 10:56:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 4571c3db-9dad-11ea-99e8-0242ac110002 0xc0020252c7 0xc0020252c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002025340} {node.kubernetes.io/unreachable Exists NoExecute 0xc002025360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-05-24 10:56:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-24 10:56:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.345: INFO: Pod "nginx-deployment-85ddf47c5d-2ztrt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2ztrt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-2ztrt,UID:473580f9-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255562,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc002025427 0xc002025428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020254a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020254c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.346: INFO: Pod "nginx-deployment-85ddf47c5d-4tw72" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4tw72,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-4tw72,UID:46d2bce6-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255528,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc002025537 0xc002025538}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020255b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020255d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.346: INFO: Pod "nginx-deployment-85ddf47c5d-5hwjj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5hwjj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-5hwjj,UID:3dd42132-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255404,Generation:0,CreationTimestamp:2020-05-24 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc002025647 0xc002025648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020256c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020256e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.4,StartTime:2020-05-24 10:56:38 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-24 10:56:43 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d75d4ca868f6568309c6b24a1196a232860038d7922e4c5815be8e2816c128f6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.346: INFO: Pod "nginx-deployment-85ddf47c5d-9sclv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9sclv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-9sclv,UID:46f12c52-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255546,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc0020257a7 0xc0020257a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002025820} {node.kubernetes.io/unreachable Exists NoExecute 0xc002025840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.347: INFO: Pod "nginx-deployment-85ddf47c5d-b4n9x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b4n9x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-b4n9x,UID:46d2b129-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255575,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc0020258b7 0xc0020258b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002025930} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002025950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-24 10:56:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.347: INFO: Pod "nginx-deployment-85ddf47c5d-bfpmf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bfpmf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-bfpmf,UID:46f0e091-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255534,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc002025a07 0xc002025a08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002025a80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002025aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.347: INFO: 
Pod "nginx-deployment-85ddf47c5d-g5nvw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g5nvw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-g5nvw,UID:46f1117a-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255545,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc002025b17 0xc002025b18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002025b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002025bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.347: INFO: Pod "nginx-deployment-85ddf47c5d-gdbdv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gdbdv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-gdbdv,UID:3dd56832-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255434,Generation:0,CreationTimestamp:2020-05-24 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc002025c27 0xc002025c28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002025ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002025cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.237,StartTime:2020-05-24 10:56:38 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-24 10:56:48 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://39edc2170a3c1af0d0c3d6aaa388cb98badbe7d010b1954125e35e1a70849571}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.348: INFO: Pod "nginx-deployment-85ddf47c5d-jn7jz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jn7jz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-jn7jz,UID:47358334-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255553,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc002025d87 0xc002025d88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002025e00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002025e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.348: INFO: Pod "nginx-deployment-85ddf47c5d-lf8jd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lf8jd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-lf8jd,UID:3dd4209c-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255403,Generation:0,CreationTimestamp:2020-05-24 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc002025e97 0xc002025e98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002025f10} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002025f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.234,StartTime:2020-05-24 10:56:38 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-24 10:56:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://926bf3e7ee8dad2cb8a8f0eed1c57195e6badb75b33d469557d3b7661ba4b758}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.348: INFO: Pod "nginx-deployment-85ddf47c5d-n8sf2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n8sf2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-n8sf2,UID:473593e4-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255557,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc002025ff7 0xc002025ff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020b2070} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020b2090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.349: INFO: Pod "nginx-deployment-85ddf47c5d-nddtt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nddtt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-nddtt,UID:473598a5-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255559,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc0020b2107 0xc0020b2108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020b2180} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020b21a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.349: INFO: Pod "nginx-deployment-85ddf47c5d-p7l65" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p7l65,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-p7l65,UID:3ddac867-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255439,Generation:0,CreationTimestamp:2020-05-24 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc0020b2217 
0xc0020b2218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020b2290} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020b22b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.7,StartTime:2020-05-24 10:56:38 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-24 10:56:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4f166c155e0f5e59d370b46c480b2cfae3f19c3e43b25a5c5e543d392046c73a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.350: INFO: Pod "nginx-deployment-85ddf47c5d-pkfrl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pkfrl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-pkfrl,UID:3ddabed6-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255419,Generation:0,CreationTimestamp:2020-05-24 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc0020b2377 0xc0020b2378}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020b23f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020b2410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.6,StartTime:2020-05-24 10:56:38 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-24 10:56:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://da5987dff74642e9c2e619d1ceffa321131a40f0243ee807baa7cd04dfffb7a3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.350: INFO: Pod "nginx-deployment-85ddf47c5d-q2j5n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q2j5n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-q2j5n,UID:46f1180f-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255549,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc0020b24d7 0xc0020b24d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020b2550} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020b2570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.351: INFO: Pod "nginx-deployment-85ddf47c5d-qwsvh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qwsvh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-qwsvh,UID:3dd563a6-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255430,Generation:0,CreationTimestamp:2020-05-24 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc0020b25e7 0xc0020b25e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020b2660} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020b2680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.235,StartTime:2020-05-24 10:56:38 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-24 10:56:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://648f2189a7641621b51ec9539cbeaf4e4b93f9a2736db0d9d733c0c52e6280d3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.351: INFO: Pod "nginx-deployment-85ddf47c5d-qxfl2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qxfl2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-qxfl2,UID:3dcc5634-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255389,Generation:0,CreationTimestamp:2020-05-24 10:56:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc0020b2747 0xc0020b2748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020b27c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020b27e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.3,StartTime:2020-05-24 10:56:38 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-05-24 10:56:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1dc5ed9aa8a945a4863537d7261ca56d03e2a9c600d4e50d537b26f494403d69}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.351: INFO: Pod "nginx-deployment-85ddf47c5d-v2rxl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v2rxl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-v2rxl,UID:3dd569e0-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255410,Generation:0,CreationTimestamp:2020-05-24 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc0020b28a7 0xc0020b28a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020b2920} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020b2940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.5,StartTime:2020-05-24 10:56:38 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-24 10:56:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://02ff7338cf81e28ad97c73c8fe548f543953d0abc9dec2ce5e52bb62ceb50536}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 
10:56:54.352: INFO: Pod "nginx-deployment-85ddf47c5d-vmrxq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vmrxq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-vmrxq,UID:47359ed2-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255560,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc0020b2a07 0xc0020b2a08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020b2a80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020b2aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 24 10:56:54.352: INFO: Pod "nginx-deployment-85ddf47c5d-zcp89" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zcp89,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-srrrb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-srrrb/pods/nginx-deployment-85ddf47c5d-zcp89,UID:46cc1fc3-9dad-11ea-99e8-0242ac110002,ResourceVersion:12255566,Generation:0,CreationTimestamp:2020-05-24 10:56:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 3dc32ef4-9dad-11ea-99e8-0242ac110002 0xc0020b2b17 0xc0020b2b18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ngt4w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ngt4w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ngt4w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020b2b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020b2bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 10:56:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-24 10:56:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:56:54.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-srrrb" for this suite. 
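The per-pod dumps above capture the proportional-scaling run mid-rollout: pods from the older ReplicaSet (pod-template-hash 85ddf47c5d) run docker.io/library/nginx:1.14-alpine and several are Ready, while the newer ReplicaSet (5c98f8fb5) points at the unresolvable image nginx:404, so its pods stay Pending. As a rough illustration only, the Go sketch below builds a Deployment object shaped like the one in these dumps; the object name, the replica count, and the bare RollingUpdate strategy are assumptions for the sketch, not values read out of this log.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// A Deployment shaped like the one in the dumps above: label name=nginx,
	// image nginx:1.14-alpine, rolling-update strategy. The replica count is
	// an assumed value; the test's exact scaling numbers are not shown here.
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10), // assumed value
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RollingUpdateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", d)
}

Updating Template.Spec.Containers[0].Image to an unresolvable tag such as nginx:404 is what leaves the newer ReplicaSet's pods Pending in the dumps above, which is exactly the stalled mid-rollout state a proportional-scaling check needs.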
May 24 10:57:18.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:57:18.650: INFO: namespace: e2e-tests-deployment-srrrb, resource: bindings, ignored listing per whitelist May 24 10:57:18.671: INFO: namespace e2e-tests-deployment-srrrb deletion completed in 24.160810508s • [SLOW TEST:40.948 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:57:18.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0524 10:57:28.937887 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 24 10:57:28.937: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:57:28.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-6qdfw" for this suite. 
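Note: the garbage-collector test above turns on the ownerReferences that are also visible in the earlier Pod dumps: once the owning ReplicationController is deleted without the orphan propagation policy, every object that lists it as an owner is collected. A small self-contained Go sketch of that dependent lookup, with toy types and UIDs rather than the real apimachinery objects:

package main

import "fmt"

// Toy stand-ins for the ownerReference shape seen in the Pod dumps
// earlier in this log; not the real apimachinery types.
type OwnerReference struct {
	Kind string
	UID  string
}

type Object struct {
	Name   string
	Owners []OwnerReference
}

// dependentsOf returns the objects that list ownerUID as an owner; these
// are what the garbage collector removes once the owner is deleted
// without the orphan propagation policy.
func dependentsOf(ownerUID string, objs []Object) []Object {
	var out []Object
	for _, o := range objs {
		for _, ref := range o.Owners {
			if ref.UID == ownerUID {
				out = append(out, o)
				break
			}
		}
	}
	return out
}

func main() {
	pods := []Object{
		{Name: "rc-pod-1", Owners: []OwnerReference{{Kind: "ReplicationController", UID: "rc-123"}}},
		{Name: "standalone", Owners: nil},
	}
	for _, p := range dependentsOf("rc-123", pods) {
		fmt.Println("collected:", p.Name) // only rc-pod-1
	}
}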
May 24 10:57:34.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:57:34.972: INFO: namespace: e2e-tests-gc-6qdfw, resource: bindings, ignored listing per whitelist May 24 10:57:35.025: INFO: namespace e2e-tests-gc-6qdfw deletion completed in 6.083509902s • [SLOW TEST:16.353 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:57:35.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:58:35.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-kqtqd" for this suite. 
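Note: the container-probe test above exercises the asymmetry between probe types: a readiness probe that always fails keeps the pod from ever becoming Ready but, unlike a failing liveness probe, never triggers a restart, which is why the test simply waits and then asserts the pod was never ready and never restarted. A toy Go model of that rule, purely illustrative and not kubelet code:

package main

import "fmt"

// Readiness only gates whether the pod is Ready; only liveness failures
// restart the container. This is an illustration of the rule the test
// relies on, not an implementation of it.
type probeOutcome struct {
	readinessPassing bool
	livenessPassing  bool
}

func effect(o probeOutcome) (ready, restart bool) {
	return o.readinessPassing, !o.livenessPassing
}

func main() {
	// The test's pod: readiness always fails, no liveness probe (treated as passing).
	ready, restart := effect(probeOutcome{readinessPassing: false, livenessPassing: true})
	fmt.Printf("ready=%v restart=%v\n", ready, restart) // ready=false restart=false
}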
May 24 10:58:57.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:58:57.314: INFO: namespace: e2e-tests-container-probe-kqtqd, resource: bindings, ignored listing per whitelist May 24 10:58:57.314: INFO: namespace e2e-tests-container-probe-kqtqd deletion completed in 22.111488276s • [SLOW TEST:82.289 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:58:57.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-8m6sn/configmap-test-90e17b90-9dad-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 10:58:57.417: INFO: Waiting up to 5m0s for pod "pod-configmaps-90e449e8-9dad-11ea-9618-0242ac110016" in namespace "e2e-tests-configmap-8m6sn" to be "success or failure" May 24 10:58:57.421: INFO: Pod "pod-configmaps-90e449e8-9dad-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27683ms May 24 10:58:59.425: INFO: Pod "pod-configmaps-90e449e8-9dad-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008265025s May 24 10:59:01.429: INFO: Pod "pod-configmaps-90e449e8-9dad-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012670728s STEP: Saw pod success May 24 10:59:01.429: INFO: Pod "pod-configmaps-90e449e8-9dad-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 10:59:01.432: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-90e449e8-9dad-11ea-9618-0242ac110016 container env-test: STEP: delete the pod May 24 10:59:01.480: INFO: Waiting for pod pod-configmaps-90e449e8-9dad-11ea-9618-0242ac110016 to disappear May 24 10:59:01.505: INFO: Pod pod-configmaps-90e449e8-9dad-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 10:59:01.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8m6sn" for this suite. 
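Note: in the configmap test above, the pod's env-test container succeeds only if the ConfigMap key it was wired to shows up as an ordinary environment variable inside the container. A minimal Go stand-in for such a container; the variable name CONFIG_DATA_1 is hypothetical and chosen only for illustration:

package main

import (
	"fmt"
	"os"
)

// Exit zero only if the (hypothetical) ConfigMap-backed variable is
// present, mirroring the "success or failure" condition in the test.
func main() {
	if v, ok := os.LookupEnv("CONFIG_DATA_1"); ok {
		fmt.Println("configmap value:", v)
		return
	}
	fmt.Println("configmap value not injected")
	os.Exit(1)
}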
May 24 10:59:07.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 10:59:07.582: INFO: namespace: e2e-tests-configmap-8m6sn, resource: bindings, ignored listing per whitelist May 24 10:59:07.598: INFO: namespace e2e-tests-configmap-8m6sn deletion completed in 6.089230043s • [SLOW TEST:10.283 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 10:59:07.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-x9nng [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 24 10:59:07.756: INFO: Found 0 stateful pods, waiting for 3 May 24 10:59:17.764: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 10:59:17.764: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 10:59:17.764: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 24 10:59:17.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9nng ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 24 10:59:18.108: INFO: stderr: "I0524 10:59:17.948965 394 log.go:172] (0xc0007fe2c0) (0xc00072a640) Create stream\nI0524 10:59:17.949076 394 log.go:172] (0xc0007fe2c0) (0xc00072a640) Stream added, broadcasting: 1\nI0524 10:59:17.954985 394 log.go:172] (0xc0007fe2c0) Reply frame received for 1\nI0524 10:59:17.955039 394 log.go:172] (0xc0007fe2c0) (0xc00072a6e0) Create stream\nI0524 10:59:17.955052 394 log.go:172] (0xc0007fe2c0) (0xc00072a6e0) Stream added, broadcasting: 3\nI0524 10:59:17.955930 394 log.go:172] (0xc0007fe2c0) Reply frame received for 3\nI0524 10:59:17.955957 394 log.go:172] (0xc0007fe2c0) (0xc0005aedc0) Create stream\nI0524 10:59:17.955965 394 log.go:172] (0xc0007fe2c0) (0xc0005aedc0) Stream added, broadcasting: 5\nI0524 10:59:17.956858 394 log.go:172] (0xc0007fe2c0) Reply frame received for 5\nI0524 10:59:18.100446 394 log.go:172] (0xc0007fe2c0) Data frame received for 3\nI0524 10:59:18.100503 394 log.go:172] (0xc00072a6e0) (3) Data frame handling\nI0524 10:59:18.100548 394 
log.go:172] (0xc00072a6e0) (3) Data frame sent\nI0524 10:59:18.101047 394 log.go:172] (0xc0007fe2c0) Data frame received for 3\nI0524 10:59:18.101267 394 log.go:172] (0xc00072a6e0) (3) Data frame handling\nI0524 10:59:18.102030 394 log.go:172] (0xc0007fe2c0) Data frame received for 5\nI0524 10:59:18.102059 394 log.go:172] (0xc0005aedc0) (5) Data frame handling\nI0524 10:59:18.103592 394 log.go:172] (0xc0007fe2c0) Data frame received for 1\nI0524 10:59:18.103607 394 log.go:172] (0xc00072a640) (1) Data frame handling\nI0524 10:59:18.103619 394 log.go:172] (0xc00072a640) (1) Data frame sent\nI0524 10:59:18.103645 394 log.go:172] (0xc0007fe2c0) (0xc00072a640) Stream removed, broadcasting: 1\nI0524 10:59:18.103776 394 log.go:172] (0xc0007fe2c0) Go away received\nI0524 10:59:18.103823 394 log.go:172] (0xc0007fe2c0) (0xc00072a640) Stream removed, broadcasting: 1\nI0524 10:59:18.103835 394 log.go:172] (0xc0007fe2c0) (0xc00072a6e0) Stream removed, broadcasting: 3\nI0524 10:59:18.103842 394 log.go:172] (0xc0007fe2c0) (0xc0005aedc0) Stream removed, broadcasting: 5\n" May 24 10:59:18.108: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 24 10:59:18.108: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 24 10:59:28.141: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 24 10:59:38.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9nng ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 10:59:38.504: INFO: stderr: "I0524 10:59:38.422036 416 log.go:172] (0xc0003a2420) (0xc0006fa640) Create stream\nI0524 10:59:38.422091 416 log.go:172] (0xc0003a2420) (0xc0006fa640) Stream added, broadcasting: 1\nI0524 10:59:38.423990 416 log.go:172] (0xc0003a2420) Reply frame received for 1\nI0524 10:59:38.424032 416 log.go:172] (0xc0003a2420) (0xc0006fa6e0) Create stream\nI0524 10:59:38.424049 416 log.go:172] (0xc0003a2420) (0xc0006fa6e0) Stream added, broadcasting: 3\nI0524 10:59:38.424879 416 log.go:172] (0xc0003a2420) Reply frame received for 3\nI0524 10:59:38.424905 416 log.go:172] (0xc0003a2420) (0xc0006fa780) Create stream\nI0524 10:59:38.424913 416 log.go:172] (0xc0003a2420) (0xc0006fa780) Stream added, broadcasting: 5\nI0524 10:59:38.425871 416 log.go:172] (0xc0003a2420) Reply frame received for 5\nI0524 10:59:38.498622 416 log.go:172] (0xc0003a2420) Data frame received for 5\nI0524 10:59:38.498660 416 log.go:172] (0xc0006fa780) (5) Data frame handling\nI0524 10:59:38.498673 416 log.go:172] (0xc0003a2420) Data frame received for 3\nI0524 10:59:38.498696 416 log.go:172] (0xc0006fa6e0) (3) Data frame handling\nI0524 10:59:38.498710 416 log.go:172] (0xc0006fa6e0) (3) Data frame sent\nI0524 10:59:38.498716 416 log.go:172] (0xc0003a2420) Data frame received for 3\nI0524 10:59:38.498721 416 log.go:172] (0xc0006fa6e0) (3) Data frame handling\nI0524 10:59:38.499933 416 log.go:172] (0xc0003a2420) Data frame received for 1\nI0524 10:59:38.499951 416 log.go:172] (0xc0006fa640) (1) Data frame handling\nI0524 10:59:38.499962 416 log.go:172] (0xc0006fa640) (1) Data frame sent\nI0524 10:59:38.499971 416 log.go:172] (0xc0003a2420) (0xc0006fa640) Stream removed, broadcasting: 1\nI0524 10:59:38.500032 416 log.go:172] (0xc0003a2420) Go 
away received\nI0524 10:59:38.500136 416 log.go:172] (0xc0003a2420) (0xc0006fa640) Stream removed, broadcasting: 1\nI0524 10:59:38.500154 416 log.go:172] (0xc0003a2420) (0xc0006fa6e0) Stream removed, broadcasting: 3\nI0524 10:59:38.500162 416 log.go:172] (0xc0003a2420) (0xc0006fa780) Stream removed, broadcasting: 5\n" May 24 10:59:38.504: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 24 10:59:38.504: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 24 10:59:58.521: INFO: Waiting for StatefulSet e2e-tests-statefulset-x9nng/ss2 to complete update May 24 10:59:58.521: INFO: Waiting for Pod e2e-tests-statefulset-x9nng/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 24 11:00:08.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9nng ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 24 11:00:08.865: INFO: stderr: "I0524 11:00:08.662961 437 log.go:172] (0xc00020e370) (0xc00072a640) Create stream\nI0524 11:00:08.663030 437 log.go:172] (0xc00020e370) (0xc00072a640) Stream added, broadcasting: 1\nI0524 11:00:08.665589 437 log.go:172] (0xc00020e370) Reply frame received for 1\nI0524 11:00:08.665644 437 log.go:172] (0xc00020e370) (0xc0007acbe0) Create stream\nI0524 11:00:08.665659 437 log.go:172] (0xc00020e370) (0xc0007acbe0) Stream added, broadcasting: 3\nI0524 11:00:08.666716 437 log.go:172] (0xc00020e370) Reply frame received for 3\nI0524 11:00:08.666767 437 log.go:172] (0xc00020e370) (0xc0002dc000) Create stream\nI0524 11:00:08.666780 437 log.go:172] (0xc00020e370) (0xc0002dc000) Stream added, broadcasting: 5\nI0524 11:00:08.667835 437 log.go:172] (0xc00020e370) Reply frame received for 5\nI0524 11:00:08.856952 437 log.go:172] (0xc00020e370) Data frame received for 3\nI0524 11:00:08.857323 437 log.go:172] (0xc0007acbe0) (3) Data frame handling\nI0524 11:00:08.857358 437 log.go:172] (0xc0007acbe0) (3) Data frame sent\nI0524 11:00:08.857379 437 log.go:172] (0xc00020e370) Data frame received for 3\nI0524 11:00:08.857397 437 log.go:172] (0xc0007acbe0) (3) Data frame handling\nI0524 11:00:08.857589 437 log.go:172] (0xc00020e370) Data frame received for 5\nI0524 11:00:08.857628 437 log.go:172] (0xc0002dc000) (5) Data frame handling\nI0524 11:00:08.859953 437 log.go:172] (0xc00020e370) Data frame received for 1\nI0524 11:00:08.859980 437 log.go:172] (0xc00072a640) (1) Data frame handling\nI0524 11:00:08.860004 437 log.go:172] (0xc00072a640) (1) Data frame sent\nI0524 11:00:08.860024 437 log.go:172] (0xc00020e370) (0xc00072a640) Stream removed, broadcasting: 1\nI0524 11:00:08.860141 437 log.go:172] (0xc00020e370) Go away received\nI0524 11:00:08.860330 437 log.go:172] (0xc00020e370) (0xc00072a640) Stream removed, broadcasting: 1\nI0524 11:00:08.860360 437 log.go:172] (0xc00020e370) (0xc0007acbe0) Stream removed, broadcasting: 3\nI0524 11:00:08.860381 437 log.go:172] (0xc00020e370) (0xc0002dc000) Stream removed, broadcasting: 5\n" May 24 11:00:08.865: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 24 11:00:08.865: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 24 11:00:18.899: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 24 11:00:28.925: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x9nng ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:00:29.164: INFO: stderr: "I0524 11:00:29.063182 459 log.go:172] (0xc000162840) (0xc00077a640) Create stream\nI0524 11:00:29.063230 459 log.go:172] (0xc000162840) (0xc00077a640) Stream added, broadcasting: 1\nI0524 11:00:29.065926 459 log.go:172] (0xc000162840) Reply frame received for 1\nI0524 11:00:29.065971 459 log.go:172] (0xc000162840) (0xc0005a8be0) Create stream\nI0524 11:00:29.065985 459 log.go:172] (0xc000162840) (0xc0005a8be0) Stream added, broadcasting: 3\nI0524 11:00:29.066855 459 log.go:172] (0xc000162840) Reply frame received for 3\nI0524 11:00:29.066913 459 log.go:172] (0xc000162840) (0xc00071c000) Create stream\nI0524 11:00:29.066930 459 log.go:172] (0xc000162840) (0xc00071c000) Stream added, broadcasting: 5\nI0524 11:00:29.067934 459 log.go:172] (0xc000162840) Reply frame received for 5\nI0524 11:00:29.156620 459 log.go:172] (0xc000162840) Data frame received for 3\nI0524 11:00:29.156663 459 log.go:172] (0xc0005a8be0) (3) Data frame handling\nI0524 11:00:29.156699 459 log.go:172] (0xc0005a8be0) (3) Data frame sent\nI0524 11:00:29.156718 459 log.go:172] (0xc000162840) Data frame received for 3\nI0524 11:00:29.156733 459 log.go:172] (0xc0005a8be0) (3) Data frame handling\nI0524 11:00:29.156853 459 log.go:172] (0xc000162840) Data frame received for 5\nI0524 11:00:29.156885 459 log.go:172] (0xc00071c000) (5) Data frame handling\nI0524 11:00:29.158774 459 log.go:172] (0xc000162840) Data frame received for 1\nI0524 11:00:29.158811 459 log.go:172] (0xc00077a640) (1) Data frame handling\nI0524 11:00:29.158847 459 log.go:172] (0xc00077a640) (1) Data frame sent\nI0524 11:00:29.158882 459 log.go:172] (0xc000162840) (0xc00077a640) Stream removed, broadcasting: 1\nI0524 11:00:29.158911 459 log.go:172] (0xc000162840) Go away received\nI0524 11:00:29.159121 459 log.go:172] (0xc000162840) (0xc00077a640) Stream removed, broadcasting: 1\nI0524 11:00:29.159141 459 log.go:172] (0xc000162840) (0xc0005a8be0) Stream removed, broadcasting: 3\nI0524 11:00:29.159154 459 log.go:172] (0xc000162840) (0xc00071c000) Stream removed, broadcasting: 5\n" May 24 11:00:29.164: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 24 11:00:29.164: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 24 11:00:49.186: INFO: Deleting all statefulset in ns e2e-tests-statefulset-x9nng May 24 11:00:49.189: INFO: Scaling statefulset ss2 to 0 May 24 11:01:09.207: INFO: Waiting for statefulset status.replicas updated to 0 May 24 11:01:09.210: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:01:09.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-x9nng" for this suite. 
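Note: the "Updating Pods in reverse ordinal order" and "Rolling back update in reverse ordinal order" steps above reflect how StatefulSet rolling updates proceed: pods are replaced from the highest ordinal down to 0, each one waited on before the next. A tiny Go sketch of that ordering for the 3-replica ss2 set in this test:

package main

import "fmt"

// A StatefulSet rolling update walks the ordinals from highest to
// lowest, waiting for each replacement pod to be Running and Ready.
func main() {
	const replicas = 3
	for ordinal := replicas - 1; ordinal >= 0; ordinal-- {
		fmt.Printf("update ss2-%d, then wait for Running and Ready\n", ordinal)
	}
}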
May 24 11:01:17.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:01:17.252: INFO: namespace: e2e-tests-statefulset-x9nng, resource: bindings, ignored listing per whitelist May 24 11:01:17.315: INFO: namespace e2e-tests-statefulset-x9nng deletion completed in 8.089569161s • [SLOW TEST:129.717 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:01:17.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 24 11:01:17.960: INFO: created pod pod-service-account-defaultsa May 24 11:01:17.960: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 24 11:01:17.993: INFO: created pod pod-service-account-mountsa May 24 11:01:17.993: INFO: pod pod-service-account-mountsa service account token volume mount: true May 24 11:01:18.000: INFO: created pod pod-service-account-nomountsa May 24 11:01:18.000: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 24 11:01:18.040: INFO: created pod pod-service-account-defaultsa-mountspec May 24 11:01:18.040: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 24 11:01:18.054: INFO: created pod pod-service-account-mountsa-mountspec May 24 11:01:18.054: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 24 11:01:18.076: INFO: created pod pod-service-account-nomountsa-mountspec May 24 11:01:18.076: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 24 11:01:18.136: INFO: created pod pod-service-account-defaultsa-nomountspec May 24 11:01:18.136: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 24 11:01:18.166: INFO: created pod pod-service-account-mountsa-nomountspec May 24 11:01:18.166: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 24 11:01:18.190: INFO: created pod pod-service-account-nomountsa-nomountspec May 24 11:01:18.190: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:01:18.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "e2e-tests-svcaccounts-nwddk" for this suite. May 24 11:01:48.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:01:48.334: INFO: namespace: e2e-tests-svcaccounts-nwddk, resource: bindings, ignored listing per whitelist May 24 11:01:48.366: INFO: namespace e2e-tests-svcaccounts-nwddk deletion completed in 30.139407649s • [SLOW TEST:31.051 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:01:48.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 24 11:01:48.500: INFO: Waiting up to 5m0s for pod "pod-f6df1282-9dad-11ea-9618-0242ac110016" in namespace "e2e-tests-emptydir-kk2pz" to be "success or failure" May 24 11:01:48.512: INFO: Pod "pod-f6df1282-9dad-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 12.861691ms May 24 11:01:50.669: INFO: Pod "pod-f6df1282-9dad-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169633548s May 24 11:01:52.674: INFO: Pod "pod-f6df1282-9dad-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174314397s STEP: Saw pod success May 24 11:01:52.674: INFO: Pod "pod-f6df1282-9dad-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:01:52.678: INFO: Trying to get logs from node hunter-worker2 pod pod-f6df1282-9dad-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 11:01:52.703: INFO: Waiting for pod pod-f6df1282-9dad-11ea-9618-0242ac110016 to disappear May 24 11:01:52.707: INFO: Pod pod-f6df1282-9dad-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:01:52.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kk2pz" for this suite. 
May 24 11:01:58.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:01:58.808: INFO: namespace: e2e-tests-emptydir-kk2pz, resource: bindings, ignored listing per whitelist May 24 11:01:58.848: INFO: namespace e2e-tests-emptydir-kk2pz deletion completed in 6.138481772s • [SLOW TEST:10.482 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:01:58.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 24 11:01:59.012: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:01:59.014: INFO: Number of nodes with available pods: 0 May 24 11:01:59.014: INFO: Node hunter-worker is running more than one daemon pod May 24 11:02:00.020: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:02:00.024: INFO: Number of nodes with available pods: 0 May 24 11:02:00.024: INFO: Node hunter-worker is running more than one daemon pod May 24 11:02:01.047: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:02:01.050: INFO: Number of nodes with available pods: 0 May 24 11:02:01.050: INFO: Node hunter-worker is running more than one daemon pod May 24 11:02:02.059: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:02:02.063: INFO: Number of nodes with available pods: 0 May 24 11:02:02.063: INFO: Node hunter-worker is running more than one daemon pod May 24 11:02:03.019: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:02:03.023: INFO: Number of nodes with available pods: 1 May 24 11:02:03.023: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:02:04.020: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 24 11:02:04.024: INFO: Number of nodes with available pods: 2 May 24 11:02:04.024: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 24 11:02:04.038: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:02:04.083: INFO: Number of nodes with available pods: 1 May 24 11:02:04.083: INFO: Node hunter-worker is running more than one daemon pod May 24 11:02:05.101: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:02:05.104: INFO: Number of nodes with available pods: 1 May 24 11:02:05.104: INFO: Node hunter-worker is running more than one daemon pod May 24 11:02:06.119: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:02:06.123: INFO: Number of nodes with available pods: 1 May 24 11:02:06.123: INFO: Node hunter-worker is running more than one daemon pod May 24 11:02:07.087: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:02:07.091: INFO: Number of nodes with available pods: 1 May 24 11:02:07.091: INFO: Node hunter-worker is running more than one daemon pod May 24 11:02:08.088: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:02:08.091: INFO: Number of nodes with available pods: 2 May 24 11:02:08.091: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-79pdw, will wait for the garbage collector to delete the pods May 24 11:02:08.157: INFO: Deleting DaemonSet.extensions daemon-set took: 6.480865ms May 24 11:02:08.257: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.347435ms May 24 11:02:21.360: INFO: Number of nodes with available pods: 0 May 24 11:02:21.360: INFO: Number of running nodes: 0, number of available pods: 0 May 24 11:02:21.366: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-79pdw/daemonsets","resourceVersion":"12257039"},"items":null} May 24 11:02:21.369: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-79pdw/pods","resourceVersion":"12257039"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:02:21.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-79pdw" for this suite. 
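Note: the repeated "DaemonSet pods can't tolerate node hunter-control-plane with taints [...NoSchedule...]" lines above are the per-node eligibility check: a node only counts toward the expected daemon pods if the DaemonSet tolerates all of its NoSchedule taints, which is why only the two worker nodes are counted in this run. A toy Go version of that filter, not the controller's actual code:

package main

import "fmt"

// A node counts only if every NoSchedule taint key on it is tolerated.
type node struct {
	name   string
	taints []string // taint keys carrying the NoSchedule effect
}

func eligibleNodes(nodes []node, tolerated map[string]bool) []string {
	var out []string
	for _, n := range nodes {
		ok := true
		for _, key := range n.taints {
			if !tolerated[key] {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, n.name)
		}
	}
	return out
}

func main() {
	nodes := []node{
		{name: "hunter-control-plane", taints: []string{"node-role.kubernetes.io/master"}},
		{name: "hunter-worker"},
		{name: "hunter-worker2"},
	}
	// This test's DaemonSet tolerates no extra taints, so only the workers count.
	fmt.Println(eligibleNodes(nodes, map[string]bool{}))
}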
May 24 11:02:27.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:02:27.445: INFO: namespace: e2e-tests-daemonsets-79pdw, resource: bindings, ignored listing per whitelist May 24 11:02:27.464: INFO: namespace e2e-tests-daemonsets-79pdw deletion completed in 6.081732269s • [SLOW TEST:28.615 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:02:27.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-r69xs A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-r69xs;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-r69xs A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-r69xs;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-r69xs.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-r69xs.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-r69xs.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-r69xs.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-r69xs.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-r69xs.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-r69xs.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-r69xs.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-r69xs.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 170.104.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.104.170_udp@PTR;check="$$(dig +tcp +noall +answer +search 170.104.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.104.170_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-r69xs A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-r69xs;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-r69xs A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-r69xs;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-r69xs.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-r69xs.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-r69xs.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-r69xs.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-r69xs.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-r69xs.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-r69xs.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-r69xs.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-r69xs.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 170.104.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.104.170_udp@PTR;check="$$(dig +tcp +noall +answer +search 170.104.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.104.170_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 11:02:35.875: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:35.897: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:35.900: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:35.903: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r69xs from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:35.907: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r69xs from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:35.910: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:35.913: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:35.916: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:35.919: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:35.939: INFO: Lookups using e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-r69xs jessie_tcp@dns-test-service.e2e-tests-dns-r69xs jessie_udp@dns-test-service.e2e-tests-dns-r69xs.svc jessie_tcp@dns-test-service.e2e-tests-dns-r69xs.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc] May 24 11:02:40.962: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods 
dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:41.029: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:41.032: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:41.035: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r69xs from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:41.038: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r69xs from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:41.065: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:41.069: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:41.072: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:41.075: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:41.094: INFO: Lookups using e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-r69xs jessie_tcp@dns-test-service.e2e-tests-dns-r69xs jessie_udp@dns-test-service.e2e-tests-dns-r69xs.svc jessie_tcp@dns-test-service.e2e-tests-dns-r69xs.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc] May 24 11:02:45.962: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:45.988: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:45.991: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods 
dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:45.994: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r69xs from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:45.997: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r69xs from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:46.000: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:46.004: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:46.007: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:46.010: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:46.027: INFO: Lookups using e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-r69xs jessie_tcp@dns-test-service.e2e-tests-dns-r69xs jessie_udp@dns-test-service.e2e-tests-dns-r69xs.svc jessie_tcp@dns-test-service.e2e-tests-dns-r69xs.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc] May 24 11:02:50.970: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:51.012: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:51.015: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:51.018: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r69xs from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:51.021: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r69xs from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested 
resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:51.024: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:51.027: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:51.031: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:51.035: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:51.084: INFO: Lookups using e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-r69xs jessie_tcp@dns-test-service.e2e-tests-dns-r69xs jessie_udp@dns-test-service.e2e-tests-dns-r69xs.svc jessie_tcp@dns-test-service.e2e-tests-dns-r69xs.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc] May 24 11:02:55.963: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:55.987: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:55.990: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:55.993: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r69xs from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:55.996: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r69xs from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:56.000: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:56.004: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server 
could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:56.007: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:56.010: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:02:56.024: INFO: Lookups using e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-r69xs jessie_tcp@dns-test-service.e2e-tests-dns-r69xs jessie_udp@dns-test-service.e2e-tests-dns-r69xs.svc jessie_tcp@dns-test-service.e2e-tests-dns-r69xs.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc] May 24 11:03:00.974: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:03:01.048: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:03:01.051: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:03:01.054: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r69xs from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:03:01.057: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r69xs from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:03:01.060: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:03:01.062: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:03:01.065: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:03:01.068: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc from pod 
e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016: the server could not find the requested resource (get pods dns-test-0e380f11-9dae-11ea-9618-0242ac110016) May 24 11:03:01.087: INFO: Lookups using e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-r69xs jessie_tcp@dns-test-service.e2e-tests-dns-r69xs jessie_udp@dns-test-service.e2e-tests-dns-r69xs.svc jessie_tcp@dns-test-service.e2e-tests-dns-r69xs.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-r69xs.svc] May 24 11:03:06.028: INFO: DNS probes using e2e-tests-dns-r69xs/dns-test-0e380f11-9dae-11ea-9618-0242ac110016 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:03:06.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-r69xs" for this suite. May 24 11:03:12.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:03:12.744: INFO: namespace: e2e-tests-dns-r69xs, resource: bindings, ignored listing per whitelist May 24 11:03:12.764: INFO: namespace e2e-tests-dns-r69xs deletion completed in 6.404271546s • [SLOW TEST:45.300 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:03:12.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-292e1d6e-9dae-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 11:03:12.922: INFO: Waiting up to 5m0s for pod "pod-configmaps-2930a198-9dae-11ea-9618-0242ac110016" in namespace "e2e-tests-configmap-hsjrw" to be "success or failure" May 24 11:03:12.926: INFO: Pod "pod-configmaps-2930a198-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.151851ms May 24 11:03:14.930: INFO: Pod "pod-configmaps-2930a198-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007142806s May 24 11:03:16.934: INFO: Pod "pod-configmaps-2930a198-9dae-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011780577s STEP: Saw pod success May 24 11:03:16.934: INFO: Pod "pod-configmaps-2930a198-9dae-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:03:16.937: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-2930a198-9dae-11ea-9618-0242ac110016 container configmap-volume-test: STEP: delete the pod May 24 11:03:16.956: INFO: Waiting for pod pod-configmaps-2930a198-9dae-11ea-9618-0242ac110016 to disappear May 24 11:03:16.961: INFO: Pod pod-configmaps-2930a198-9dae-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:03:16.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hsjrw" for this suite. May 24 11:03:23.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:03:23.112: INFO: namespace: e2e-tests-configmap-hsjrw, resource: bindings, ignored listing per whitelist May 24 11:03:23.119: INFO: namespace e2e-tests-configmap-hsjrw deletion completed in 6.155508403s • [SLOW TEST:10.355 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:03:23.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 11:03:23.217: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 15.488845ms) May 24 11:03:23.221: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 4.054969ms) May 24 11:03:23.224: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.481484ms) May 24 11:03:23.226: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.632952ms) May 24 11:03:23.229: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.734418ms) May 24 11:03:23.232: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.602004ms) May 24 11:03:23.234: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.857601ms) May 24 11:03:23.237: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.908407ms) May 24 11:03:23.241: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.321054ms) May 24 11:03:23.264: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 23.025005ms) May 24 11:03:23.270: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 6.156614ms) May 24 11:03:23.274: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.987676ms) May 24 11:03:23.277: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.834695ms) May 24 11:03:23.279: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.370847ms) May 24 11:03:23.282: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.739763ms) May 24 11:03:23.285: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.390373ms) May 24 11:03:23.287: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.659087ms) May 24 11:03:23.290: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.651133ms) May 24 11:03:23.293: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.918924ms) May 24 11:03:23.296: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.904685ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:03:23.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-m5vm5" for this suite. May 24 11:03:29.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:03:29.357: INFO: namespace: e2e-tests-proxy-m5vm5, resource: bindings, ignored listing per whitelist May 24 11:03:29.410: INFO: namespace e2e-tests-proxy-m5vm5 deletion completed in 6.110856069s • [SLOW TEST:6.290 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:03:29.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 11:03:29.529: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:03:30.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-gtcdw" for this suite. 
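The CustomResourceDefinition spec above only logs the kubeconfig it loads, so the CRD it creates and deletes never appears in the output. As a rough sketch of what such an object looks like at this API level (apiextensions.k8s.io/v1beta1 in v1.13), with a hypothetical group and names rather than the randomly generated ones the test uses:

apiVersion: apiextensions.k8s.io/v1beta1   # CRD API version served by this release
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com        # must be <plural>.<group>; name here is hypothetical
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab

Creating an object like this against the API server and deleting it again is essentially all this conformance spec exercises; no custom resources of the new kind are involved.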
May 24 11:03:36.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:03:36.694: INFO: namespace: e2e-tests-custom-resource-definition-gtcdw, resource: bindings, ignored listing per whitelist May 24 11:03:36.719: INFO: namespace e2e-tests-custom-resource-definition-gtcdw deletion completed in 6.104550351s • [SLOW TEST:7.309 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:03:36.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 24 11:03:36.857: INFO: Waiting up to 5m0s for pod "pod-3774f4ea-9dae-11ea-9618-0242ac110016" in namespace "e2e-tests-emptydir-wkt4b" to be "success or failure" May 24 11:03:36.875: INFO: Pod "pod-3774f4ea-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 18.265548ms May 24 11:03:38.886: INFO: Pod "pod-3774f4ea-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029445021s May 24 11:03:40.890: INFO: Pod "pod-3774f4ea-9dae-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033323205s STEP: Saw pod success May 24 11:03:40.890: INFO: Pod "pod-3774f4ea-9dae-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:03:40.893: INFO: Trying to get logs from node hunter-worker pod pod-3774f4ea-9dae-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 11:03:40.916: INFO: Waiting for pod pod-3774f4ea-9dae-11ea-9618-0242ac110016 to disappear May 24 11:03:40.976: INFO: Pod pod-3774f4ea-9dae-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:03:40.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wkt4b" for this suite. 
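The emptyDir pod built by the (root,0644,default) spec above is not dumped in the log. A minimal sketch of an equivalent pod, using illustrative names and a busybox image rather than the suite's own test image:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-default-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                        # illustrative; the e2e suite uses its own test image
    command: ["sh", "-c", "ls -l /mnt"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}                          # default medium (node storage); the tmpfs cases later in the run set medium: Memory

The (root,0644,...) part of the spec name refers to the user and file mode the test container writes and verifies inside the volume, not to fields in the pod spec itself.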
May 24 11:03:46.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:03:47.060: INFO: namespace: e2e-tests-emptydir-wkt4b, resource: bindings, ignored listing per whitelist May 24 11:03:47.067: INFO: namespace e2e-tests-emptydir-wkt4b deletion completed in 6.087420613s • [SLOW TEST:10.348 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:03:47.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 11:03:47.179: INFO: Creating deployment "test-recreate-deployment" May 24 11:03:47.203: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 24 11:03:47.221: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 24 11:03:49.228: INFO: Waiting deployment "test-recreate-deployment" to complete May 24 11:03:49.230: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915027, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915027, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915027, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915027, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 11:03:51.235: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 24 11:03:51.266: INFO: Updating deployment test-recreate-deployment May 24 11:03:51.266: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 24 11:03:51.810: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-tf7zq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tf7zq/deployments/test-recreate-deployment,UID:3d9d441e-9dae-11ea-99e8-0242ac110002,ResourceVersion:12257406,Generation:2,CreationTimestamp:2020-05-24 11:03:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-24 11:03:51 +0000 UTC 2020-05-24 11:03:51 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-24 11:03:51 +0000 UTC 2020-05-24 11:03:47 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 24 11:03:51.815: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-tf7zq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tf7zq/replicasets/test-recreate-deployment-589c4bfd,UID:4019da48-9dae-11ea-99e8-0242ac110002,ResourceVersion:12257404,Generation:1,CreationTimestamp:2020-05-24 11:03:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3d9d441e-9dae-11ea-99e8-0242ac110002 0xc00201b8af 0xc00201b8c0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 24 11:03:51.815: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 24 11:03:51.816: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-tf7zq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tf7zq/replicasets/test-recreate-deployment-5bf7f65dc,UID:3da379a1-9dae-11ea-99e8-0242ac110002,ResourceVersion:12257395,Generation:2,CreationTimestamp:2020-05-24 11:03:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3d9d441e-9dae-11ea-99e8-0242ac110002 0xc00201b980 0xc00201b981}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 24 11:03:51.862: INFO: Pod "test-recreate-deployment-589c4bfd-82q2w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-82q2w,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-tf7zq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tf7zq/pods/test-recreate-deployment-589c4bfd-82q2w,UID:4021a87d-9dae-11ea-99e8-0242ac110002,ResourceVersion:12257407,Generation:0,CreationTimestamp:2020-05-24 11:03:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 4019da48-9dae-11ea-99e8-0242ac110002 0xc001dba50f 0xc001dba520}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4vp6b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4vp6b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4vp6b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001dba590} {node.kubernetes.io/unreachable Exists NoExecute 0xc001dba6f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:03:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:03:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:03:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:03:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-24 11:03:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:03:51.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-tf7zq" for this suite. 
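Stripped of status and server-populated metadata, the test-recreate-deployment object dumped above corresponds to roughly the following manifest (only fields visible in the log are included):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
  labels:
    name: sample-pod-3
spec:
  replicas: 1
  strategy:
    type: Recreate                 # old pods are scaled down to zero before any new pod is created
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine

The Recreate strategy is why, in the dump above, the old redis ReplicaSet is already scaled to 0 replicas while the new nginx pod is still Pending.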
May 24 11:03:57.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:03:57.989: INFO: namespace: e2e-tests-deployment-tf7zq, resource: bindings, ignored listing per whitelist May 24 11:03:57.995: INFO: namespace e2e-tests-deployment-tf7zq deletion completed in 6.128673898s • [SLOW TEST:10.927 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:03:57.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 24 11:03:58.128: INFO: Waiting up to 5m0s for pod "pod-4421c2ef-9dae-11ea-9618-0242ac110016" in namespace "e2e-tests-emptydir-9jbvt" to be "success or failure" May 24 11:03:58.131: INFO: Pod "pod-4421c2ef-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.912794ms May 24 11:04:00.135: INFO: Pod "pod-4421c2ef-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007065042s May 24 11:04:02.140: INFO: Pod "pod-4421c2ef-9dae-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011535251s STEP: Saw pod success May 24 11:04:02.140: INFO: Pod "pod-4421c2ef-9dae-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:04:02.143: INFO: Trying to get logs from node hunter-worker pod pod-4421c2ef-9dae-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 11:04:02.163: INFO: Waiting for pod pod-4421c2ef-9dae-11ea-9618-0242ac110016 to disappear May 24 11:04:02.181: INFO: Pod pod-4421c2ef-9dae-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:04:02.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9jbvt" for this suite. 
May 24 11:04:08.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:04:08.218: INFO: namespace: e2e-tests-emptydir-9jbvt, resource: bindings, ignored listing per whitelist May 24 11:04:08.286: INFO: namespace e2e-tests-emptydir-9jbvt deletion completed in 6.101199429s • [SLOW TEST:10.290 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:04:08.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults May 24 11:04:08.409: INFO: Waiting up to 5m0s for pod "client-containers-4a41fdd5-9dae-11ea-9618-0242ac110016" in namespace "e2e-tests-containers-ssltp" to be "success or failure" May 24 11:04:08.450: INFO: Pod "client-containers-4a41fdd5-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 40.767213ms May 24 11:04:10.454: INFO: Pod "client-containers-4a41fdd5-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044417354s May 24 11:04:12.458: INFO: Pod "client-containers-4a41fdd5-9dae-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 4.048535106s May 24 11:04:14.462: INFO: Pod "client-containers-4a41fdd5-9dae-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052898056s STEP: Saw pod success May 24 11:04:14.462: INFO: Pod "client-containers-4a41fdd5-9dae-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:04:14.464: INFO: Trying to get logs from node hunter-worker2 pod client-containers-4a41fdd5-9dae-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 11:04:14.496: INFO: Waiting for pod client-containers-4a41fdd5-9dae-11ea-9618-0242ac110016 to disappear May 24 11:04:14.539: INFO: Pod client-containers-4a41fdd5-9dae-11ea-9618-0242ac110016 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:04:14.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-ssltp" for this suite. 
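The Docker Containers spec above checks that a container with neither command nor args falls back to the image's built-in ENTRYPOINT and CMD. A hedged sketch of such a pod (name and image are illustrative, not taken from the log):

apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-example    # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # illustrative image
    # no command: and no args: here, so the image's own ENTRYPOINT/CMD run unchanged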
May 24 11:04:20.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:04:20.630: INFO: namespace: e2e-tests-containers-ssltp, resource: bindings, ignored listing per whitelist May 24 11:04:20.637: INFO: namespace e2e-tests-containers-ssltp deletion completed in 6.093897203s • [SLOW TEST:12.351 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:04:20.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 24 11:04:20.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-gcmj4' May 24 11:04:23.359: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 24 11:04:23.359: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 May 24 11:04:25.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-gcmj4' May 24 11:04:25.845: INFO: stderr: "" May 24 11:04:25.845: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:04:25.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gcmj4" for this suite. 
May 24 11:04:32.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:04:32.161: INFO: namespace: e2e-tests-kubectl-gcmj4, resource: bindings, ignored listing per whitelist May 24 11:04:32.203: INFO: namespace e2e-tests-kubectl-gcmj4 deletion completed in 6.177678399s • [SLOW TEST:11.565 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:04:32.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components May 24 11:04:32.282: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 24 11:04:32.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8j44h' May 24 11:04:32.596: INFO: stderr: "" May 24 11:04:32.596: INFO: stdout: "service/redis-slave created\n" May 24 11:04:32.596: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 24 11:04:32.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8j44h' May 24 11:04:32.954: INFO: stderr: "" May 24 11:04:32.954: INFO: stdout: "service/redis-master created\n" May 24 11:04:32.955: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 24 11:04:32.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8j44h' May 24 11:04:33.302: INFO: stderr: "" May 24 11:04:33.302: INFO: stdout: "service/frontend created\n" May 24 11:04:33.303: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 24 11:04:33.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8j44h' May 24 11:04:33.598: INFO: stderr: "" May 24 11:04:33.598: INFO: stdout: "deployment.extensions/frontend created\n" May 24 11:04:33.598: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 24 11:04:33.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8j44h' May 24 11:04:33.918: INFO: stderr: "" May 24 11:04:33.918: INFO: stdout: "deployment.extensions/redis-master created\n" May 24 11:04:33.918: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 24 11:04:33.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8j44h' May 24 11:04:34.261: INFO: stderr: "" May 24 11:04:34.261: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app May 24 11:04:34.261: INFO: Waiting for all frontend pods to be Running. May 24 11:04:44.312: INFO: Waiting for frontend to serve content. May 24 11:04:44.372: INFO: Trying to add a new entry to the guestbook. May 24 11:04:44.387: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 24 11:04:44.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8j44h' May 24 11:04:44.629: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 24 11:04:44.629: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 24 11:04:44.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8j44h' May 24 11:04:44.798: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 11:04:44.798: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 24 11:04:44.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8j44h' May 24 11:04:44.935: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 11:04:44.935: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 24 11:04:44.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8j44h' May 24 11:04:45.041: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 11:04:45.041: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources May 24 11:04:45.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8j44h' May 24 11:04:45.143: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 11:04:45.143: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 24 11:04:45.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8j44h' May 24 11:04:45.396: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 11:04:45.396: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:04:45.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8j44h" for this suite. 
May 24 11:05:25.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:05:25.900: INFO: namespace: e2e-tests-kubectl-8j44h, resource: bindings, ignored listing per whitelist May 24 11:05:25.900: INFO: namespace e2e-tests-kubectl-8j44h deletion completed in 40.369875536s • [SLOW TEST:53.697 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:05:25.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 24 11:05:26.047: INFO: Waiting up to 5m0s for pod "pod-78881bf5-9dae-11ea-9618-0242ac110016" in namespace "e2e-tests-emptydir-4pqdj" to be "success or failure" May 24 11:05:26.050: INFO: Pod "pod-78881bf5-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.602736ms May 24 11:05:28.054: INFO: Pod "pod-78881bf5-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007636659s May 24 11:05:30.059: INFO: Pod "pod-78881bf5-9dae-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012295483s STEP: Saw pod success May 24 11:05:30.059: INFO: Pod "pod-78881bf5-9dae-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:05:30.063: INFO: Trying to get logs from node hunter-worker2 pod pod-78881bf5-9dae-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 11:05:30.165: INFO: Waiting for pod pod-78881bf5-9dae-11ea-9618-0242ac110016 to disappear May 24 11:05:30.171: INFO: Pod pod-78881bf5-9dae-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:05:30.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4pqdj" for this suite. 
May 24 11:05:36.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:05:36.248: INFO: namespace: e2e-tests-emptydir-4pqdj, resource: bindings, ignored listing per whitelist May 24 11:05:36.272: INFO: namespace e2e-tests-emptydir-4pqdj deletion completed in 6.097634015s • [SLOW TEST:10.371 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:05:36.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 24 11:05:36.444: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-j5g5q,SelfLink:/api/v1/namespaces/e2e-tests-watch-j5g5q/configmaps/e2e-watch-test-label-changed,UID:7eb48bca-9dae-11ea-99e8-0242ac110002,ResourceVersion:12257931,Generation:0,CreationTimestamp:2020-05-24 11:05:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 24 11:05:36.444: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-j5g5q,SelfLink:/api/v1/namespaces/e2e-tests-watch-j5g5q/configmaps/e2e-watch-test-label-changed,UID:7eb48bca-9dae-11ea-99e8-0242ac110002,ResourceVersion:12257932,Generation:0,CreationTimestamp:2020-05-24 11:05:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 24 11:05:36.444: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-j5g5q,SelfLink:/api/v1/namespaces/e2e-tests-watch-j5g5q/configmaps/e2e-watch-test-label-changed,UID:7eb48bca-9dae-11ea-99e8-0242ac110002,ResourceVersion:12257933,Generation:0,CreationTimestamp:2020-05-24 11:05:36 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 24 11:05:46.484: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-j5g5q,SelfLink:/api/v1/namespaces/e2e-tests-watch-j5g5q/configmaps/e2e-watch-test-label-changed,UID:7eb48bca-9dae-11ea-99e8-0242ac110002,ResourceVersion:12257954,Generation:0,CreationTimestamp:2020-05-24 11:05:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 24 11:05:46.484: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-j5g5q,SelfLink:/api/v1/namespaces/e2e-tests-watch-j5g5q/configmaps/e2e-watch-test-label-changed,UID:7eb48bca-9dae-11ea-99e8-0242ac110002,ResourceVersion:12257955,Generation:0,CreationTimestamp:2020-05-24 11:05:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 24 11:05:46.484: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-j5g5q,SelfLink:/api/v1/namespaces/e2e-tests-watch-j5g5q/configmaps/e2e-watch-test-label-changed,UID:7eb48bca-9dae-11ea-99e8-0242ac110002,ResourceVersion:12257956,Generation:0,CreationTimestamp:2020-05-24 11:05:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:05:46.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-j5g5q" for this suite. 
May 24 11:05:52.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:05:52.566: INFO: namespace: e2e-tests-watch-j5g5q, resource: bindings, ignored listing per whitelist May 24 11:05:52.594: INFO: namespace e2e-tests-watch-j5g5q deletion completed in 6.092379299s • [SLOW TEST:16.322 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:05:52.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 24 11:05:52.724: INFO: Waiting up to 5m0s for pod "downward-api-886f5867-9dae-11ea-9618-0242ac110016" in namespace "e2e-tests-downward-api-vtchk" to be "success or failure" May 24 11:05:52.727: INFO: Pod "downward-api-886f5867-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.418968ms May 24 11:05:54.731: INFO: Pod "downward-api-886f5867-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007482155s May 24 11:05:56.735: INFO: Pod "downward-api-886f5867-9dae-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011067584s STEP: Saw pod success May 24 11:05:56.735: INFO: Pod "downward-api-886f5867-9dae-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:05:56.738: INFO: Trying to get logs from node hunter-worker pod downward-api-886f5867-9dae-11ea-9618-0242ac110016 container dapi-container: STEP: delete the pod May 24 11:05:56.759: INFO: Waiting for pod downward-api-886f5867-9dae-11ea-9618-0242ac110016 to disappear May 24 11:05:56.763: INFO: Pod downward-api-886f5867-9dae-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:05:56.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vtchk" for this suite. 
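The downward-api pod above exposes container CPU and memory limits as environment variables; since it sets no limits, the values fall back to the node's allocatable resources, which is what the spec asserts. A minimal sketch with illustrative names and image:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults-example   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                       # illustrative
    command: ["sh", "-c", "env | grep LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu           # no limit set, so this resolves to node allocatable CPU
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory        # likewise resolves to node allocatable memory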
May 24 11:06:02.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:06:02.836: INFO: namespace: e2e-tests-downward-api-vtchk, resource: bindings, ignored listing per whitelist May 24 11:06:02.849: INFO: namespace e2e-tests-downward-api-vtchk deletion completed in 6.081525446s • [SLOW TEST:10.254 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:06:02.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 24 11:06:02.939: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 11:06:02.976: INFO: Waiting for terminating namespaces to be deleted... May 24 11:06:02.979: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 24 11:06:02.984: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 24 11:06:02.984: INFO: Container kube-proxy ready: true, restart count 0 May 24 11:06:02.984: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 24 11:06:02.984: INFO: Container kindnet-cni ready: true, restart count 0 May 24 11:06:02.984: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 24 11:06:02.984: INFO: Container coredns ready: true, restart count 0 May 24 11:06:02.984: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 24 11:06:02.989: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 24 11:06:02.989: INFO: Container coredns ready: true, restart count 0 May 24 11:06:02.989: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 24 11:06:02.989: INFO: Container kindnet-cni ready: true, restart count 0 May 24 11:06:02.989: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 24 11:06:02.989: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. 
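A pod with a non-empty nodeSelector that no node satisfies stays Pending, and the scheduler records the FailedScheduling event quoted just below. A sketch of such a pod, assuming k8s.io/api is available; the selector key/value are deliberately bogus and the image is illustrative:

```go
// Sketch: a pod whose nodeSelector matches no node, so it is never scheduled.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node advertises this label, so the scheduler reports
			// "0/N nodes are available: N node(s) didn't match node selector."
			NodeSelector: map[string]string{"example.com/nonexistent": "true"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```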
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1611f22012c14449], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:06:04.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-gcch4" for this suite. May 24 11:06:10.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:06:10.117: INFO: namespace: e2e-tests-sched-pred-gcch4, resource: bindings, ignored listing per whitelist May 24 11:06:10.138: INFO: namespace e2e-tests-sched-pred-gcch4 deletion completed in 6.099314891s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.289 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:06:10.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 24 11:06:14.784: INFO: Successfully updated pod "pod-update-activedeadlineseconds-92e124e0-9dae-11ea-9618-0242ac110016" May 24 11:06:14.785: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-92e124e0-9dae-11ea-9618-0242ac110016" in namespace "e2e-tests-pods-5j26w" to be "terminated due to deadline exceeded" May 24 11:06:14.796: INFO: Pod "pod-update-activedeadlineseconds-92e124e0-9dae-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 11.271769ms May 24 11:06:16.800: INFO: Pod "pod-update-activedeadlineseconds-92e124e0-9dae-11ea-9618-0242ac110016": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.015431846s May 24 11:06:16.800: INFO: Pod "pod-update-activedeadlineseconds-92e124e0-9dae-11ea-9618-0242ac110016" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:06:16.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-5j26w" for this suite. 
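The update above shrinks spec.activeDeadlineSeconds on an already-running pod; once the deadline passes, the kubelet terminates the pod and its phase moves to Failed with reason DeadlineExceeded, which is the condition the test waits for. A client-go sketch of that mutation, assuming a current context-aware client-go; the namespace, pod name and deadline value are placeholders:

```go
// Sketch: tighten activeDeadlineSeconds on a running pod.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Re-read and update under RetryOnConflict so a concurrent write (e.g. a
	// kubelet status update) does not make the update fail permanently.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := cs.CoreV1().Pods("demo").Get(context.TODO(), "sample-pod", metav1.GetOptions{})
		if err != nil {
			return err
		}
		deadline := int64(5) // seconds measured from pod start
		pod.Spec.ActiveDeadlineSeconds = &deadline
		_, err = cs.CoreV1().Pods("demo").Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("deadline set; the pod fails with DeadlineExceeded once it elapses")
}
```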
May 24 11:06:22.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:06:22.841: INFO: namespace: e2e-tests-pods-5j26w, resource: bindings, ignored listing per whitelist May 24 11:06:22.896: INFO: namespace e2e-tests-pods-5j26w deletion completed in 6.090843105s • [SLOW TEST:12.758 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:06:22.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 11:06:23.031: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a7f889a-9dae-11ea-9618-0242ac110016" in namespace "e2e-tests-downward-api-gcw2t" to be "success or failure" May 24 11:06:23.034: INFO: Pod "downwardapi-volume-9a7f889a-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.302595ms May 24 11:06:25.038: INFO: Pod "downwardapi-volume-9a7f889a-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007263547s May 24 11:06:27.043: INFO: Pod "downwardapi-volume-9a7f889a-9dae-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011985243s STEP: Saw pod success May 24 11:06:27.043: INFO: Pod "downwardapi-volume-9a7f889a-9dae-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:06:27.047: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-9a7f889a-9dae-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 11:06:27.099: INFO: Waiting for pod downwardapi-volume-9a7f889a-9dae-11ea-9618-0242ac110016 to disappear May 24 11:06:27.106: INFO: Pod downwardapi-volume-9a7f889a-9dae-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:06:27.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-gcw2t" for this suite. 
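Here the container's memory limit is surfaced through a downwardAPI volume rather than env vars: limits.memory is written into a file, and the test reads the value back from the container's log. A sketch of the volume wiring, assuming k8s.io/api and k8s.io/apimachinery are vendored; the mount path, file name, 64Mi limit and 1Mi divisor are illustrative:

```go
// Sketch: downwardAPI volume exposing limits.memory as a file.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// ContainerName is required for resource fields in a
							// volume; Divisor scales the reported value (to MiB here).
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
								Divisor:       resource.MustParse("1Mi"),
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```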
May 24 11:06:33.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:06:33.177: INFO: namespace: e2e-tests-downward-api-gcw2t, resource: bindings, ignored listing per whitelist May 24 11:06:33.206: INFO: namespace e2e-tests-downward-api-gcw2t deletion completed in 6.096295563s • [SLOW TEST:10.310 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:06:33.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-t96hp in namespace e2e-tests-proxy-flv48 I0524 11:06:33.503837 6 runners.go:184] Created replication controller with name: proxy-service-t96hp, namespace: e2e-tests-proxy-flv48, replica count: 1 I0524 11:06:34.554229 6 runners.go:184] proxy-service-t96hp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 11:06:35.554382 6 runners.go:184] proxy-service-t96hp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 11:06:36.554635 6 runners.go:184] proxy-service-t96hp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 11:06:37.554853 6 runners.go:184] proxy-service-t96hp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 11:06:38.555131 6 runners.go:184] proxy-service-t96hp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 11:06:39.555329 6 runners.go:184] proxy-service-t96hp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 11:06:40.555609 6 runners.go:184] proxy-service-t96hp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 11:06:41.555822 6 runners.go:184] proxy-service-t96hp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 11:06:42.556090 6 runners.go:184] proxy-service-t96hp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 11:06:43.556314 6 runners.go:184] proxy-service-t96hp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 11:06:44.556616 6 
runners.go:184] proxy-service-t96hp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 11:06:45.556874 6 runners.go:184] proxy-service-t96hp Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 11:06:45.560: INFO: setup took 12.142345507s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 24 11:06:45.566: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-flv48/pods/proxy-service-t96hp-dzxrc:162/proxy/: bar (200; 5.614514ms) May 24 11:06:45.570: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-flv48/pods/proxy-service-t96hp-dzxrc/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-af6868e8-9dae-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 11:06:58.118: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-af6ad98c-9dae-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-94xlw" to be "success or failure" May 24 11:06:58.122: INFO: Pod "pod-projected-configmaps-af6ad98c-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.742157ms May 24 11:07:00.125: INFO: Pod "pod-projected-configmaps-af6ad98c-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007551996s May 24 11:07:02.196: INFO: Pod "pod-projected-configmaps-af6ad98c-9dae-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077939943s STEP: Saw pod success May 24 11:07:02.196: INFO: Pod "pod-projected-configmaps-af6ad98c-9dae-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:07:02.199: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-af6ad98c-9dae-11ea-9618-0242ac110016 container projected-configmap-volume-test: STEP: delete the pod May 24 11:07:02.377: INFO: Waiting for pod pod-projected-configmaps-af6ad98c-9dae-11ea-9618-0242ac110016 to disappear May 24 11:07:02.385: INFO: Pod pod-projected-configmaps-af6ad98c-9dae-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:07:02.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-94xlw" for this suite. 
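The pod above consumes a ConfigMap through a projected volume; the same projected volume type can also merge secrets, downward API fields and service-account tokens under a single mount point. A sketch of the ConfigMap projection, assuming k8s.io/api; the ConfigMap name, key and paths are illustrative, not the generated names in the log:

```go
// Sketch: a pod mounting one ConfigMap key via a projected volume.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/projected/settings.txt"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-cm", MountPath: "/etc/projected", ReadOnly: true}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
								// Project a single key to a chosen file name.
								Items: []corev1.KeyToPath{{Key: "data", Path: "settings.txt"}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```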
May 24 11:07:08.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:07:08.442: INFO: namespace: e2e-tests-projected-94xlw, resource: bindings, ignored listing per whitelist May 24 11:07:08.498: INFO: namespace e2e-tests-projected-94xlw deletion completed in 6.109237319s • [SLOW TEST:10.499 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:07:08.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:07:12.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-bcd9d" for this suite. 
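The step names above ("Cleaning up the secret / the configmap / the pod") suggest this test mounts a Secret volume and a ConfigMap volume side by side in one pod; both are materialized by the kubelet through internal emptyDir wrappers, and the check is that the two mounts coexist without conflict. A sketch of a pod carrying both volume types under that reading, assuming k8s.io/api; all names and paths are illustrative:

```go
// Sketch: one pod mounting a Secret volume and a ConfigMap volume together.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-volumes-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "secret-volume", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-test-secret"},
				}},
				{Name: "configmap-volume", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-test-configmap"},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```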
May 24 11:07:18.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:07:18.861: INFO: namespace: e2e-tests-emptydir-wrapper-bcd9d, resource: bindings, ignored listing per whitelist May 24 11:07:18.894: INFO: namespace e2e-tests-emptydir-wrapper-bcd9d deletion completed in 6.088609463s • [SLOW TEST:10.397 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:07:18.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 11:07:19.029: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbe13c1f-9dae-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-cwr7s" to be "success or failure" May 24 11:07:19.041: INFO: Pod "downwardapi-volume-bbe13c1f-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 11.490967ms May 24 11:07:21.045: INFO: Pod "downwardapi-volume-bbe13c1f-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015762442s May 24 11:07:23.049: INFO: Pod "downwardapi-volume-bbe13c1f-9dae-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019630652s STEP: Saw pod success May 24 11:07:23.049: INFO: Pod "downwardapi-volume-bbe13c1f-9dae-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:07:23.052: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-bbe13c1f-9dae-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 11:07:23.070: INFO: Waiting for pod downwardapi-volume-bbe13c1f-9dae-11ea-9618-0242ac110016 to disappear May 24 11:07:23.105: INFO: Pod downwardapi-volume-bbe13c1f-9dae-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:07:23.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cwr7s" for this suite. 
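Here the projected downward API item carries an explicit per-file mode, and the test verifies the mounted file ends up with those permissions. A sketch of the item-level mode setting, assuming k8s.io/api; the 0400 mode, paths and names are illustrative:

```go
// Sketch: projected downwardAPI item with an explicit file mode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // owner read-only; checked on the mounted file
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
									Mode:     &mode, // per-item mode on the projected file
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```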
May 24 11:07:29.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:07:29.154: INFO: namespace: e2e-tests-projected-cwr7s, resource: bindings, ignored listing per whitelist May 24 11:07:29.195: INFO: namespace e2e-tests-projected-cwr7s deletion completed in 6.08702611s • [SLOW TEST:10.301 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:07:29.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-v25n2 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v25n2 to expose endpoints map[] May 24 11:07:29.386: INFO: Get endpoints failed (14.892235ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 24 11:07:30.389: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v25n2 exposes endpoints map[] (1.018614274s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-v25n2 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v25n2 to expose endpoints map[pod1:[80]] May 24 11:07:34.473: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v25n2 exposes endpoints map[pod1:[80]] (4.076309185s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-v25n2 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v25n2 to expose endpoints map[pod1:[80] pod2:[80]] May 24 11:07:38.550: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v25n2 exposes endpoints map[pod1:[80] pod2:[80]] (4.072325261s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-v25n2 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v25n2 to expose endpoints map[pod2:[80]] May 24 11:07:39.598: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v25n2 exposes endpoints map[pod2:[80]] (1.044026092s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-v25n2 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v25n2 to expose endpoints map[] May 24 11:07:40.624: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v25n2 exposes endpoints map[] (1.020382168s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:07:40.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-v25n2" for this suite. May 24 11:08:02.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:08:02.730: INFO: namespace: e2e-tests-services-v25n2, resource: bindings, ignored listing per whitelist May 24 11:08:02.798: INFO: namespace e2e-tests-services-v25n2 deletion completed in 22.087408568s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:33.602 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:08:02.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-qzhc STEP: Creating a pod to test atomic-volume-subpath May 24 11:08:02.922: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qzhc" in namespace "e2e-tests-subpath-xhbt7" to be "success or failure" May 24 11:08:02.939: INFO: Pod "pod-subpath-test-configmap-qzhc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.315161ms May 24 11:08:05.070: INFO: Pod "pod-subpath-test-configmap-qzhc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148263305s May 24 11:08:07.118: INFO: Pod "pod-subpath-test-configmap-qzhc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196502142s May 24 11:08:09.123: INFO: Pod "pod-subpath-test-configmap-qzhc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200956608s May 24 11:08:11.126: INFO: Pod "pod-subpath-test-configmap-qzhc": Phase="Running", Reason="", readiness=false. Elapsed: 8.204385528s May 24 11:08:13.131: INFO: Pod "pod-subpath-test-configmap-qzhc": Phase="Running", Reason="", readiness=false. Elapsed: 10.209245419s May 24 11:08:15.135: INFO: Pod "pod-subpath-test-configmap-qzhc": Phase="Running", Reason="", readiness=false. Elapsed: 12.213467177s May 24 11:08:17.140: INFO: Pod "pod-subpath-test-configmap-qzhc": Phase="Running", Reason="", readiness=false. Elapsed: 14.21802936s May 24 11:08:19.144: INFO: Pod "pod-subpath-test-configmap-qzhc": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.222530152s May 24 11:08:21.149: INFO: Pod "pod-subpath-test-configmap-qzhc": Phase="Running", Reason="", readiness=false. Elapsed: 18.226769005s May 24 11:08:23.154: INFO: Pod "pod-subpath-test-configmap-qzhc": Phase="Running", Reason="", readiness=false. Elapsed: 20.231754157s May 24 11:08:25.158: INFO: Pod "pod-subpath-test-configmap-qzhc": Phase="Running", Reason="", readiness=false. Elapsed: 22.236339271s May 24 11:08:27.162: INFO: Pod "pod-subpath-test-configmap-qzhc": Phase="Running", Reason="", readiness=false. Elapsed: 24.240269605s May 24 11:08:29.166: INFO: Pod "pod-subpath-test-configmap-qzhc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.244634023s STEP: Saw pod success May 24 11:08:29.167: INFO: Pod "pod-subpath-test-configmap-qzhc" satisfied condition "success or failure" May 24 11:08:29.170: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-qzhc container test-container-subpath-configmap-qzhc: STEP: delete the pod May 24 11:08:29.208: INFO: Waiting for pod pod-subpath-test-configmap-qzhc to disappear May 24 11:08:29.225: INFO: Pod pod-subpath-test-configmap-qzhc no longer exists STEP: Deleting pod pod-subpath-test-configmap-qzhc May 24 11:08:29.225: INFO: Deleting pod "pod-subpath-test-configmap-qzhc" in namespace "e2e-tests-subpath-xhbt7" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:08:29.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-xhbt7" for this suite. May 24 11:08:35.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:08:35.436: INFO: namespace: e2e-tests-subpath-xhbt7, resource: bindings, ignored listing per whitelist May 24 11:08:35.496: INFO: namespace e2e-tests-subpath-xhbt7 deletion completed in 6.266097562s • [SLOW TEST:32.698 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:08:35.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token May 24 11:08:36.131: INFO: Waiting up to 5m0s for pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-fjbdl" in namespace "e2e-tests-svcaccounts-cr79m" to be "success or failure" May 24 11:08:36.135: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-fjbdl": 
Phase="Pending", Reason="", readiness=false. Elapsed: 4.592544ms May 24 11:08:38.139: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-fjbdl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008502495s May 24 11:08:40.221: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-fjbdl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089870557s May 24 11:08:42.225: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-fjbdl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093791597s STEP: Saw pod success May 24 11:08:42.225: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-fjbdl" satisfied condition "success or failure" May 24 11:08:42.227: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-fjbdl container token-test: STEP: delete the pod May 24 11:08:42.432: INFO: Waiting for pod pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-fjbdl to disappear May 24 11:08:42.515: INFO: Pod pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-fjbdl no longer exists STEP: Creating a pod to test consume service account root CA May 24 11:08:42.592: INFO: Waiting up to 5m0s for pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-qngpq" in namespace "e2e-tests-svcaccounts-cr79m" to be "success or failure" May 24 11:08:42.595: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-qngpq": Phase="Pending", Reason="", readiness=false. Elapsed: 3.030948ms May 24 11:08:44.599: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-qngpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007009133s May 24 11:08:46.603: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-qngpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010924396s May 24 11:08:48.607: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-qngpq": Phase="Running", Reason="", readiness=false. Elapsed: 6.015252373s May 24 11:08:50.611: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-qngpq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019557388s STEP: Saw pod success May 24 11:08:50.611: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-qngpq" satisfied condition "success or failure" May 24 11:08:50.614: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-qngpq container root-ca-test: STEP: delete the pod May 24 11:08:50.656: INFO: Waiting for pod pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-qngpq to disappear May 24 11:08:50.669: INFO: Pod pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-qngpq no longer exists STEP: Creating a pod to test consume service account namespace May 24 11:08:50.673: INFO: Waiting up to 5m0s for pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-mvcx6" in namespace "e2e-tests-svcaccounts-cr79m" to be "success or failure" May 24 11:08:50.724: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-mvcx6": Phase="Pending", Reason="", readiness=false. Elapsed: 50.945722ms May 24 11:08:52.728: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-mvcx6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054947187s May 24 11:08:54.831: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-mvcx6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.157950203s May 24 11:08:56.835: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-mvcx6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.161997107s STEP: Saw pod success May 24 11:08:56.835: INFO: Pod "pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-mvcx6" satisfied condition "success or failure" May 24 11:08:56.838: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-mvcx6 container namespace-test: STEP: delete the pod May 24 11:08:56.946: INFO: Waiting for pod pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-mvcx6 to disappear May 24 11:08:56.950: INFO: Pod pod-service-account-e9d69bed-9dae-11ea-9618-0242ac110016-mvcx6 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:08:56.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-cr79m" for this suite. May 24 11:09:02.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:09:03.036: INFO: namespace: e2e-tests-svcaccounts-cr79m, resource: bindings, ignored listing per whitelist May 24 11:09:03.057: INFO: namespace e2e-tests-svcaccounts-cr79m deletion completed in 6.104066674s • [SLOW TEST:27.561 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:09:03.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f9f47c24-9dae-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 11:09:03.190: INFO: Waiting up to 5m0s for pod "pod-secrets-f9f6fa09-9dae-11ea-9618-0242ac110016" in namespace "e2e-tests-secrets-d46z8" to be "success or failure" May 24 11:09:03.194: INFO: Pod "pod-secrets-f9f6fa09-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061758ms May 24 11:09:05.198: INFO: Pod "pod-secrets-f9f6fa09-9dae-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007954532s May 24 11:09:07.221: INFO: Pod "pod-secrets-f9f6fa09-9dae-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030503988s STEP: Saw pod success May 24 11:09:07.221: INFO: Pod "pod-secrets-f9f6fa09-9dae-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:09:07.224: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-f9f6fa09-9dae-11ea-9618-0242ac110016 container secret-volume-test: STEP: delete the pod May 24 11:09:07.253: INFO: Waiting for pod pod-secrets-f9f6fa09-9dae-11ea-9618-0242ac110016 to disappear May 24 11:09:07.318: INFO: Pod pod-secrets-f9f6fa09-9dae-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:09:07.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-d46z8" for this suite. May 24 11:09:13.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:09:13.431: INFO: namespace: e2e-tests-secrets-d46z8, resource: bindings, ignored listing per whitelist May 24 11:09:13.441: INFO: namespace e2e-tests-secrets-d46z8 deletion completed in 6.093810775s • [SLOW TEST:10.383 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:09:13.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0524 11:09:54.308425 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
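The "delete the rc" step above passes an orphaning delete option, so the API server removes the replication controller but strips its ownerReferences from the pods instead of cascading the delete; the 30-second wait then confirms the garbage collector leaves those pods alone. A client-go sketch of such a delete, assuming a current context-aware client-go; the namespace and RC name are placeholders:

```go
// Sketch: delete a ReplicationController with orphan propagation.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Orphan propagation: the RC object is deleted, but its pods are left
	// behind with their ownerReferences cleared rather than being reaped.
	orphan := metav1.DeletePropagationOrphan
	err = cs.CoreV1().ReplicationControllers("demo").Delete(
		context.TODO(), "simpletest-rc", metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
	fmt.Println("rc deleted; its pods remain as orphans")
}
```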
May 24 11:09:54.308: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:09:54.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-79kqs" for this suite. May 24 11:10:02.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:10:02.563: INFO: namespace: e2e-tests-gc-79kqs, resource: bindings, ignored listing per whitelist May 24 11:10:02.571: INFO: namespace e2e-tests-gc-79kqs deletion completed in 8.258611602s • [SLOW TEST:49.130 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:10:02.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-1d9d6e56-9daf-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 11:10:03.050: INFO: Waiting up to 5m0s for pod "pod-configmaps-1da09a4b-9daf-11ea-9618-0242ac110016" in namespace "e2e-tests-configmap-v9clh" to be "success or failure" May 24 11:10:03.311: INFO: Pod "pod-configmaps-1da09a4b-9daf-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 261.106032ms May 24 11:10:05.371: INFO: Pod "pod-configmaps-1da09a4b-9daf-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.321074598s May 24 11:10:07.375: INFO: Pod "pod-configmaps-1da09a4b-9daf-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 4.325342871s May 24 11:10:09.379: INFO: Pod "pod-configmaps-1da09a4b-9daf-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.329022759s STEP: Saw pod success May 24 11:10:09.379: INFO: Pod "pod-configmaps-1da09a4b-9daf-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:10:09.381: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-1da09a4b-9daf-11ea-9618-0242ac110016 container configmap-volume-test: STEP: delete the pod May 24 11:10:09.396: INFO: Waiting for pod pod-configmaps-1da09a4b-9daf-11ea-9618-0242ac110016 to disappear May 24 11:10:09.401: INFO: Pod pod-configmaps-1da09a4b-9daf-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:10:09.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-v9clh" for this suite. May 24 11:10:15.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:10:15.428: INFO: namespace: e2e-tests-configmap-v9clh, resource: bindings, ignored listing per whitelist May 24 11:10:15.498: INFO: namespace e2e-tests-configmap-v9clh deletion completed in 6.094818062s • [SLOW TEST:12.927 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:10:15.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 11:10:15.640: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 May 24 11:10:15.644: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-l2g59/daemonsets","resourceVersion":"12259124"},"items":null} May 24 11:10:15.645: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-l2g59/pods","resourceVersion":"12259124"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:10:15.651: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-l2g59" for this suite. May 24 11:10:21.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:10:21.707: INFO: namespace: e2e-tests-daemonsets-l2g59, resource: bindings, ignored listing per whitelist May 24 11:10:21.738: INFO: namespace e2e-tests-daemonsets-l2g59 deletion completed in 6.084297808s S [SKIPPING] [6.240 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 11:10:15.640: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:10:21.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 11:10:21.862: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 24 11:10:26.866: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 24 11:10:26.866: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 24 11:10:28.871: INFO: Creating deployment "test-rollover-deployment" May 24 11:10:28.897: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 24 11:10:30.903: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 24 11:10:30.910: INFO: Ensure that both replica sets have 1 created replica May 24 11:10:30.915: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 24 11:10:30.921: INFO: Updating deployment test-rollover-deployment May 24 11:10:30.921: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 24 11:10:32.933: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 24 11:10:32.940: INFO: Make sure deployment "test-rollover-deployment" is complete May 24 11:10:32.947: INFO: all replica sets need to contain the pod-template-hash label May 24 11:10:32.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915431, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 11:10:34.955: INFO: all replica sets need to contain the pod-template-hash label May 24 11:10:34.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915434, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 11:10:36.955: INFO: all replica sets need to contain the pod-template-hash label May 24 11:10:36.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915434, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 11:10:38.954: INFO: all replica sets need to contain the pod-template-hash label May 24 11:10:38.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915434, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, 
CollisionCount:(*int32)(nil)} May 24 11:10:40.962: INFO: all replica sets need to contain the pod-template-hash label May 24 11:10:40.962: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915434, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 11:10:42.957: INFO: all replica sets need to contain the pod-template-hash label May 24 11:10:42.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915434, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915428, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 11:10:44.952: INFO: May 24 11:10:44.952: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 24 11:10:44.957: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-k6485,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k6485/deployments/test-rollover-deployment,UID:2d0a94aa-9daf-11ea-99e8-0242ac110002,ResourceVersion:12259258,Generation:2,CreationTimestamp:2020-05-24 11:10:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-24 11:10:28 +0000 UTC 2020-05-24 11:10:28 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-24 11:10:44 +0000 UTC 2020-05-24 11:10:28 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 24 11:10:44.959: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-k6485,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k6485/replicasets/test-rollover-deployment-5b8479fdb6,UID:2e435b94-9daf-11ea-99e8-0242ac110002,ResourceVersion:12259249,Generation:2,CreationTimestamp:2020-05-24 11:10:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2d0a94aa-9daf-11ea-99e8-0242ac110002 0xc00234a317 0xc00234a318}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 24 11:10:44.959: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 24 11:10:44.959: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-k6485,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k6485/replicasets/test-rollover-controller,UID:28d8fbef-9daf-11ea-99e8-0242ac110002,ResourceVersion:12259257,Generation:2,CreationTimestamp:2020-05-24 11:10:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2d0a94aa-9daf-11ea-99e8-0242ac110002 0xc00234a0af 0xc00234a0c0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 24 11:10:44.959: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-k6485,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k6485/replicasets/test-rollover-deployment-58494b7559,UID:2d0f9c43-9daf-11ea-99e8-0242ac110002,ResourceVersion:12259215,Generation:2,CreationTimestamp:2020-05-24 11:10:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2d0a94aa-9daf-11ea-99e8-0242ac110002 0xc00234a187 0xc00234a188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 24 11:10:44.960: INFO: Pod "test-rollover-deployment-5b8479fdb6-lq9kc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-lq9kc,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-k6485,SelfLink:/api/v1/namespaces/e2e-tests-deployment-k6485/pods/test-rollover-deployment-5b8479fdb6-lq9kc,UID:2e57b691-9daf-11ea-99e8-0242ac110002,ResourceVersion:12259227,Generation:0,CreationTimestamp:2020-05-24 11:10:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 2e435b94-9daf-11ea-99e8-0242ac110002 0xc001a05277 0xc001a05278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9qwnh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qwnh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-9qwnh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a052f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a05310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:10:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:10:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:10:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-05-24 11:10:31 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.40,StartTime:2020-05-24 11:10:31 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-24 11:10:33 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://8044deed46632238858ef45bb71b92d0010835f6c10747d2b672cc6528fb2980}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:10:44.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-k6485" for this suite. May 24 11:10:50.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:10:51.029: INFO: namespace: e2e-tests-deployment-k6485, resource: bindings, ignored listing per whitelist May 24 11:10:51.061: INFO: namespace e2e-tests-deployment-k6485 deletion completed in 6.099002533s • [SLOW TEST:29.323 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:10:51.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 24 11:10:51.237: INFO: Waiting up to 5m0s for pod "pod-3a58db9a-9daf-11ea-9618-0242ac110016" in namespace "e2e-tests-emptydir-zns22" to be "success or failure" May 24 11:10:51.244: INFO: Pod "pod-3a58db9a-9daf-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 7.426031ms May 24 11:10:53.270: INFO: Pod "pod-3a58db9a-9daf-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033754682s May 24 11:10:55.275: INFO: Pod "pod-3a58db9a-9daf-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037991259s STEP: Saw pod success May 24 11:10:55.275: INFO: Pod "pod-3a58db9a-9daf-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:10:55.277: INFO: Trying to get logs from node hunter-worker2 pod pod-3a58db9a-9daf-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 11:10:55.318: INFO: Waiting for pod pod-3a58db9a-9daf-11ea-9618-0242ac110016 to disappear May 24 11:10:55.361: INFO: Pod pod-3a58db9a-9daf-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:10:55.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zns22" for this suite. May 24 11:11:01.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:11:01.980: INFO: namespace: e2e-tests-emptydir-zns22, resource: bindings, ignored listing per whitelist May 24 11:11:02.006: INFO: namespace e2e-tests-emptydir-zns22 deletion completed in 6.640584658s • [SLOW TEST:10.944 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:11:02.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 24 11:11:02.111: INFO: Waiting up to 5m0s for pod "downward-api-40d7e7bd-9daf-11ea-9618-0242ac110016" in namespace "e2e-tests-downward-api-ss55r" to be "success or failure" May 24 11:11:02.115: INFO: Pod "downward-api-40d7e7bd-9daf-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.920034ms May 24 11:11:04.119: INFO: Pod "downward-api-40d7e7bd-9daf-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007634251s May 24 11:11:06.123: INFO: Pod "downward-api-40d7e7bd-9daf-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011826277s STEP: Saw pod success May 24 11:11:06.123: INFO: Pod "downward-api-40d7e7bd-9daf-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:11:06.126: INFO: Trying to get logs from node hunter-worker2 pod downward-api-40d7e7bd-9daf-11ea-9618-0242ac110016 container dapi-container: STEP: delete the pod May 24 11:11:06.152: INFO: Waiting for pod downward-api-40d7e7bd-9daf-11ea-9618-0242ac110016 to disappear May 24 11:11:06.169: INFO: Pod downward-api-40d7e7bd-9daf-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:11:06.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ss55r" for this suite. May 24 11:11:12.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:11:12.214: INFO: namespace: e2e-tests-downward-api-ss55r, resource: bindings, ignored listing per whitelist May 24 11:11:12.284: INFO: namespace e2e-tests-downward-api-ss55r deletion completed in 6.11142246s • [SLOW TEST:10.278 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:11:12.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 24 11:11:12.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jnw5v' May 24 11:11:12.683: INFO: stderr: "" May 24 11:11:12.683: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 24 11:11:12.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jnw5v' May 24 11:11:12.845: INFO: stderr: "" May 24 11:11:12.846: INFO: stdout: "update-demo-nautilus-hq9rk update-demo-nautilus-sh9k9 " May 24 11:11:12.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hq9rk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnw5v' May 24 11:11:12.952: INFO: stderr: "" May 24 11:11:12.952: INFO: stdout: "" May 24 11:11:12.952: INFO: update-demo-nautilus-hq9rk is created but not running May 24 11:11:17.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jnw5v' May 24 11:11:18.055: INFO: stderr: "" May 24 11:11:18.055: INFO: stdout: "update-demo-nautilus-hq9rk update-demo-nautilus-sh9k9 " May 24 11:11:18.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hq9rk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnw5v' May 24 11:11:18.159: INFO: stderr: "" May 24 11:11:18.159: INFO: stdout: "true" May 24 11:11:18.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hq9rk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnw5v' May 24 11:11:18.258: INFO: stderr: "" May 24 11:11:18.258: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 11:11:18.259: INFO: validating pod update-demo-nautilus-hq9rk May 24 11:11:18.282: INFO: got data: { "image": "nautilus.jpg" } May 24 11:11:18.282: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 11:11:18.282: INFO: update-demo-nautilus-hq9rk is verified up and running May 24 11:11:18.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sh9k9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnw5v' May 24 11:11:18.383: INFO: stderr: "" May 24 11:11:18.383: INFO: stdout: "true" May 24 11:11:18.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sh9k9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnw5v' May 24 11:11:18.488: INFO: stderr: "" May 24 11:11:18.488: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 11:11:18.488: INFO: validating pod update-demo-nautilus-sh9k9 May 24 11:11:18.494: INFO: got data: { "image": "nautilus.jpg" } May 24 11:11:18.494: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 11:11:18.494: INFO: update-demo-nautilus-sh9k9 is verified up and running STEP: using delete to clean up resources May 24 11:11:18.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jnw5v' May 24 11:11:18.603: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 24 11:11:18.603: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 24 11:11:18.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-jnw5v' May 24 11:11:18.702: INFO: stderr: "No resources found.\n" May 24 11:11:18.702: INFO: stdout: "" May 24 11:11:18.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-jnw5v -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 24 11:11:18.909: INFO: stderr: "" May 24 11:11:18.909: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:11:18.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jnw5v" for this suite. May 24 11:11:41.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:11:41.203: INFO: namespace: e2e-tests-kubectl-jnw5v, resource: bindings, ignored listing per whitelist May 24 11:11:41.257: INFO: namespace e2e-tests-kubectl-jnw5v deletion completed in 22.176852147s • [SLOW TEST:28.972 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:11:41.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller May 24 11:11:41.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vpdl4' May 24 11:11:41.705: INFO: stderr: "" May 24 11:11:41.705: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
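The manifest piped to kubectl create -f - above is not captured in the log. A minimal reconstruction, using only what this spec reports elsewhere (two replicas, pods labelled name=update-demo, a container named update-demo running gcr.io/kubernetes-e2e-test-images/nautilus:1.0), might look like the sketch below; the heredoc form and the exact selector are assumptions, not the test's actual input:

kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vpdl4 <<'EOF'
# Approximate replication controller behind the update-demo-nautilus pods in this run.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
EOF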
May 24 11:11:41.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vpdl4' May 24 11:11:41.877: INFO: stderr: "" May 24 11:11:41.877: INFO: stdout: "update-demo-nautilus-qgpkk update-demo-nautilus-r5fcv " May 24 11:11:41.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qgpkk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vpdl4' May 24 11:11:41.985: INFO: stderr: "" May 24 11:11:41.985: INFO: stdout: "" May 24 11:11:41.985: INFO: update-demo-nautilus-qgpkk is created but not running May 24 11:11:46.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vpdl4' May 24 11:11:47.101: INFO: stderr: "" May 24 11:11:47.101: INFO: stdout: "update-demo-nautilus-qgpkk update-demo-nautilus-r5fcv " May 24 11:11:47.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qgpkk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vpdl4' May 24 11:11:47.210: INFO: stderr: "" May 24 11:11:47.210: INFO: stdout: "true" May 24 11:11:47.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qgpkk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vpdl4' May 24 11:11:47.317: INFO: stderr: "" May 24 11:11:47.317: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 11:11:47.317: INFO: validating pod update-demo-nautilus-qgpkk May 24 11:11:47.322: INFO: got data: { "image": "nautilus.jpg" } May 24 11:11:47.322: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 11:11:47.322: INFO: update-demo-nautilus-qgpkk is verified up and running May 24 11:11:47.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r5fcv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vpdl4' May 24 11:11:47.423: INFO: stderr: "" May 24 11:11:47.423: INFO: stdout: "true" May 24 11:11:47.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r5fcv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vpdl4' May 24 11:11:47.517: INFO: stderr: "" May 24 11:11:47.517: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 11:11:47.517: INFO: validating pod update-demo-nautilus-r5fcv May 24 11:11:47.520: INFO: got data: { "image": "nautilus.jpg" } May 24 11:11:47.520: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
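The two per-pod probes this spec keeps repeating are plain kubectl get -o template calls. Copied verbatim from the log and wrapped in a small helper (the wrapper script, argument handling and file name are additions for illustration, not part of the test):

#!/bin/sh
# check-update-demo.sh <pod-name> <namespace>
# Prints "true" if the update-demo container is running, then prints its image.
POD="$1"
NS="$2"
KUBECTL="/usr/local/bin/kubectl --kubeconfig=/root/.kube/config"

# Running-state probe: emits "true" only when containerStatuses reports a
# running container named update-demo.
$KUBECTL get pods "$POD" --namespace="$NS" -o template \
  --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
echo

# Image probe: emits the image declared for the update-demo container.
$KUBECTL get pods "$POD" --namespace="$NS" -o template \
  --template='{{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
echo

Invoked as ./check-update-demo.sh update-demo-nautilus-qgpkk e2e-tests-kubectl-vpdl4 it reproduces the "true" / image pairs logged above.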
May 24 11:11:47.520: INFO: update-demo-nautilus-r5fcv is verified up and running STEP: rolling-update to new replication controller May 24 11:11:47.522: INFO: scanned /root for discovery docs: May 24 11:11:47.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-vpdl4' May 24 11:12:10.029: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 24 11:12:10.029: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 24 11:12:10.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vpdl4' May 24 11:12:10.135: INFO: stderr: "" May 24 11:12:10.135: INFO: stdout: "update-demo-kitten-ql6nb update-demo-kitten-tzqvl update-demo-nautilus-qgpkk " STEP: Replicas for name=update-demo: expected=2 actual=3 May 24 11:12:15.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vpdl4' May 24 11:12:15.237: INFO: stderr: "" May 24 11:12:15.237: INFO: stdout: "update-demo-kitten-ql6nb update-demo-kitten-tzqvl " May 24 11:12:15.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ql6nb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vpdl4' May 24 11:12:15.343: INFO: stderr: "" May 24 11:12:15.343: INFO: stdout: "true" May 24 11:12:15.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ql6nb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vpdl4' May 24 11:12:15.439: INFO: stderr: "" May 24 11:12:15.439: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 24 11:12:15.439: INFO: validating pod update-demo-kitten-ql6nb May 24 11:12:15.448: INFO: got data: { "image": "kitten.jpg" } May 24 11:12:15.449: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 24 11:12:15.449: INFO: update-demo-kitten-ql6nb is verified up and running May 24 11:12:15.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tzqvl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vpdl4' May 24 11:12:15.554: INFO: stderr: "" May 24 11:12:15.554: INFO: stdout: "true" May 24 11:12:15.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tzqvl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vpdl4' May 24 11:12:15.660: INFO: stderr: "" May 24 11:12:15.660: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 24 11:12:15.660: INFO: validating pod update-demo-kitten-tzqvl May 24 11:12:15.665: INFO: got data: { "image": "kitten.jpg" } May 24 11:12:15.665: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 24 11:12:15.665: INFO: update-demo-kitten-tzqvl is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:12:15.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vpdl4" for this suite. May 24 11:12:39.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:12:39.750: INFO: namespace: e2e-tests-kubectl-vpdl4, resource: bindings, ignored listing per whitelist May 24 11:12:39.809: INFO: namespace e2e-tests-kubectl-vpdl4 deletion completed in 24.141239987s • [SLOW TEST:58.553 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:12:39.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 24 11:12:44.061: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:13:08.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-pc2k9" for this suite. May 24 11:13:14.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:13:14.248: INFO: namespace: e2e-tests-namespaces-pc2k9, resource: bindings, ignored listing per whitelist May 24 11:13:14.267: INFO: namespace e2e-tests-namespaces-pc2k9 deletion completed in 6.102102248s STEP: Destroying namespace "e2e-tests-nsdeletetest-vkstk" for this suite. May 24 11:13:14.269: INFO: Namespace e2e-tests-nsdeletetest-vkstk was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-h6fw7" for this suite. May 24 11:13:20.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:13:20.316: INFO: namespace: e2e-tests-nsdeletetest-h6fw7, resource: bindings, ignored listing per whitelist May 24 11:13:20.364: INFO: namespace e2e-tests-nsdeletetest-h6fw7 deletion completed in 6.094248702s • [SLOW TEST:40.554 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:13:20.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 24 11:13:20.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:20.778: INFO: stderr: "" May 24 11:13:20.778: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
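Waiting for all containers in name=update-demo pods to come up is done here by polling the go-template probes shown earlier; a more compact equivalent, using a different technique than the test does and assuming a kubectl release that ships the wait subcommand, would be:

# Block until every pod carrying the update-demo label reports Ready.
kubectl --kubeconfig=/root/.kube/config wait --for=condition=Ready \
  pod -l name=update-demo --namespace=e2e-tests-kubectl-mh27z --timeout=2m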
May 24 11:13:20.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:20.907: INFO: stderr: "" May 24 11:13:20.907: INFO: stdout: "update-demo-nautilus-rvlg5 update-demo-nautilus-tvb55 " May 24 11:13:20.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvlg5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:20.996: INFO: stderr: "" May 24 11:13:20.996: INFO: stdout: "" May 24 11:13:20.996: INFO: update-demo-nautilus-rvlg5 is created but not running May 24 11:13:25.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:26.092: INFO: stderr: "" May 24 11:13:26.092: INFO: stdout: "update-demo-nautilus-rvlg5 update-demo-nautilus-tvb55 " May 24 11:13:26.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvlg5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:26.191: INFO: stderr: "" May 24 11:13:26.191: INFO: stdout: "true" May 24 11:13:26.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvlg5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:26.287: INFO: stderr: "" May 24 11:13:26.287: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 11:13:26.287: INFO: validating pod update-demo-nautilus-rvlg5 May 24 11:13:26.291: INFO: got data: { "image": "nautilus.jpg" } May 24 11:13:26.291: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 11:13:26.291: INFO: update-demo-nautilus-rvlg5 is verified up and running May 24 11:13:26.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tvb55 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:26.389: INFO: stderr: "" May 24 11:13:26.389: INFO: stdout: "true" May 24 11:13:26.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tvb55 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:26.491: INFO: stderr: "" May 24 11:13:26.492: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 11:13:26.492: INFO: validating pod update-demo-nautilus-tvb55 May 24 11:13:26.496: INFO: got data: { "image": "nautilus.jpg" } May 24 11:13:26.496: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
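The scale-down and scale-up steps that follow in this spec use kubectl scale; lifted out of the log (namespace and controller name from this run), the sequence is:

KUBECTL="/usr/local/bin/kubectl --kubeconfig=/root/.kube/config"
NS=e2e-tests-kubectl-mh27z

# Scale the replication controller down to a single replica, waiting up to 5m.
$KUBECTL scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=$NS

# ...re-verify the surviving pod, then scale back up to two replicas.
$KUBECTL scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=$NS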
May 24 11:13:26.496: INFO: update-demo-nautilus-tvb55 is verified up and running STEP: scaling down the replication controller May 24 11:13:26.498: INFO: scanned /root for discovery docs: May 24 11:13:26.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:27.657: INFO: stderr: "" May 24 11:13:27.657: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 24 11:13:27.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:27.778: INFO: stderr: "" May 24 11:13:27.778: INFO: stdout: "update-demo-nautilus-rvlg5 update-demo-nautilus-tvb55 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 24 11:13:32.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:32.892: INFO: stderr: "" May 24 11:13:32.892: INFO: stdout: "update-demo-nautilus-rvlg5 " May 24 11:13:32.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvlg5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:32.999: INFO: stderr: "" May 24 11:13:32.999: INFO: stdout: "true" May 24 11:13:32.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvlg5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:33.101: INFO: stderr: "" May 24 11:13:33.101: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 11:13:33.101: INFO: validating pod update-demo-nautilus-rvlg5 May 24 11:13:33.104: INFO: got data: { "image": "nautilus.jpg" } May 24 11:13:33.104: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 11:13:33.104: INFO: update-demo-nautilus-rvlg5 is verified up and running STEP: scaling up the replication controller May 24 11:13:33.106: INFO: scanned /root for discovery docs: May 24 11:13:33.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:34.244: INFO: stderr: "" May 24 11:13:34.244: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 24 11:13:34.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:34.342: INFO: stderr: "" May 24 11:13:34.342: INFO: stdout: "update-demo-nautilus-nxvnm update-demo-nautilus-rvlg5 " May 24 11:13:34.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nxvnm -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:34.452: INFO: stderr: "" May 24 11:13:34.453: INFO: stdout: "" May 24 11:13:34.453: INFO: update-demo-nautilus-nxvnm is created but not running May 24 11:13:39.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:39.552: INFO: stderr: "" May 24 11:13:39.552: INFO: stdout: "update-demo-nautilus-nxvnm update-demo-nautilus-rvlg5 " May 24 11:13:39.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nxvnm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:39.648: INFO: stderr: "" May 24 11:13:39.648: INFO: stdout: "true" May 24 11:13:39.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nxvnm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:39.739: INFO: stderr: "" May 24 11:13:39.739: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 11:13:39.739: INFO: validating pod update-demo-nautilus-nxvnm May 24 11:13:39.743: INFO: got data: { "image": "nautilus.jpg" } May 24 11:13:39.743: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 11:13:39.743: INFO: update-demo-nautilus-nxvnm is verified up and running May 24 11:13:39.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvlg5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:39.866: INFO: stderr: "" May 24 11:13:39.866: INFO: stdout: "true" May 24 11:13:39.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvlg5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:39.963: INFO: stderr: "" May 24 11:13:39.963: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 11:13:39.963: INFO: validating pod update-demo-nautilus-rvlg5 May 24 11:13:39.979: INFO: got data: { "image": "nautilus.jpg" } May 24 11:13:39.979: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 11:13:39.979: INFO: update-demo-nautilus-rvlg5 is verified up and running STEP: using delete to clean up resources May 24 11:13:39.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:40.109: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 24 11:13:40.109: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 24 11:13:40.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-mh27z' May 24 11:13:40.409: INFO: stderr: "No resources found.\n" May 24 11:13:40.409: INFO: stdout: "" May 24 11:13:40.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-mh27z -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 24 11:13:40.549: INFO: stderr: "" May 24 11:13:40.549: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:13:40.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mh27z" for this suite. May 24 11:14:02.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:14:02.613: INFO: namespace: e2e-tests-kubectl-mh27z, resource: bindings, ignored listing per whitelist May 24 11:14:02.640: INFO: namespace e2e-tests-kubectl-mh27z deletion completed in 22.087792698s • [SLOW TEST:42.276 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:14:02.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all May 24 11:14:02.754: INFO: Waiting up to 5m0s for pod "client-containers-ac82e19c-9daf-11ea-9618-0242ac110016" in namespace "e2e-tests-containers-2r5zt" to be "success or failure" May 24 11:14:02.766: INFO: Pod "client-containers-ac82e19c-9daf-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 11.975698ms May 24 11:14:04.770: INFO: Pod "client-containers-ac82e19c-9daf-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016327791s May 24 11:14:06.774: INFO: Pod "client-containers-ac82e19c-9daf-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02066045s STEP: Saw pod success May 24 11:14:06.774: INFO: Pod "client-containers-ac82e19c-9daf-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:14:06.777: INFO: Trying to get logs from node hunter-worker2 pod client-containers-ac82e19c-9daf-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 11:14:06.809: INFO: Waiting for pod client-containers-ac82e19c-9daf-11ea-9618-0242ac110016 to disappear May 24 11:14:06.872: INFO: Pod client-containers-ac82e19c-9daf-11ea-9618-0242ac110016 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:14:06.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-2r5zt" for this suite. May 24 11:14:12.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:14:12.911: INFO: namespace: e2e-tests-containers-2r5zt, resource: bindings, ignored listing per whitelist May 24 11:14:12.969: INFO: namespace e2e-tests-containers-2r5zt deletion completed in 6.093648705s • [SLOW TEST:10.329 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:14:12.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-b2ae21ec-9daf-11ea-9618-0242ac110016 STEP: Creating configMap with name cm-test-opt-upd-b2ae22a4-9daf-11ea-9618-0242ac110016 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b2ae21ec-9daf-11ea-9618-0242ac110016 STEP: Updating configmap cm-test-opt-upd-b2ae22a4-9daf-11ea-9618-0242ac110016 STEP: Creating configMap with name cm-test-opt-create-b2ae22d1-9daf-11ea-9618-0242ac110016 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:15:51.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8j679" for this suite. 
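The Projected configMap spec above drives three ConfigMaps through the Go client: one deleted, one updated and one created while the pod is running. An approximate kubectl rendering of that sequence is sketched below; the shortened names, keys and namespace are placeholders, and the pod mounting the projected volume is omitted:

NS=projected-demo   # placeholder, not the generated e2e-tests-projected-8j679

# Starting state: two ConfigMaps referenced as optional sources of a projected volume.
kubectl create configmap cm-test-opt-del --from-literal=data-1=value-1 --namespace=$NS
kubectl create configmap cm-test-opt-upd --from-literal=data-1=value-1 --namespace=$NS

# The three mutations the pod's volume is expected to reflect:
kubectl delete configmap cm-test-opt-del --namespace=$NS
kubectl create configmap cm-test-opt-upd --from-literal=data-3=value-3 \
  --namespace=$NS --dry-run -o yaml | kubectl apply --namespace=$NS -f -   # replace existing data
kubectl create configmap cm-test-opt-create --from-literal=data-1=value-1 --namespace=$NS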
May 24 11:16:15.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:16:15.706: INFO: namespace: e2e-tests-projected-8j679, resource: bindings, ignored listing per whitelist May 24 11:16:15.746: INFO: namespace e2e-tests-projected-8j679 deletion completed in 24.101349532s • [SLOW TEST:122.776 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:16:15.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-85g8b STEP: creating a selector STEP: Creating the service pods in kubernetes May 24 11:16:15.893: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 24 11:16:42.042: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.50 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-85g8b PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:16:42.042: INFO: >>> kubeConfig: /root/.kube/config I0524 11:16:42.078864 6 log.go:172] (0xc0008be580) (0xc00223d900) Create stream I0524 11:16:42.078897 6 log.go:172] (0xc0008be580) (0xc00223d900) Stream added, broadcasting: 1 I0524 11:16:42.081537 6 log.go:172] (0xc0008be580) Reply frame received for 1 I0524 11:16:42.081578 6 log.go:172] (0xc0008be580) (0xc001c0c5a0) Create stream I0524 11:16:42.081591 6 log.go:172] (0xc0008be580) (0xc001c0c5a0) Stream added, broadcasting: 3 I0524 11:16:42.082689 6 log.go:172] (0xc0008be580) Reply frame received for 3 I0524 11:16:42.082739 6 log.go:172] (0xc0008be580) (0xc0016dc000) Create stream I0524 11:16:42.082752 6 log.go:172] (0xc0008be580) (0xc0016dc000) Stream added, broadcasting: 5 I0524 11:16:42.083714 6 log.go:172] (0xc0008be580) Reply frame received for 5 I0524 11:16:43.184181 6 log.go:172] (0xc0008be580) Data frame received for 3 I0524 11:16:43.184234 6 log.go:172] (0xc001c0c5a0) (3) Data frame handling I0524 11:16:43.184278 6 log.go:172] (0xc001c0c5a0) (3) Data frame sent I0524 11:16:43.184771 6 log.go:172] (0xc0008be580) Data frame received for 5 I0524 11:16:43.184852 6 log.go:172] (0xc0016dc000) (5) Data frame handling I0524 11:16:43.184952 6 log.go:172] (0xc0008be580) Data frame received for 3 I0524 11:16:43.184989 6 log.go:172] (0xc001c0c5a0) (3) Data frame handling I0524 11:16:43.187405 6 log.go:172] (0xc0008be580) Data frame received for 1 I0524 
11:16:43.187443 6 log.go:172] (0xc00223d900) (1) Data frame handling I0524 11:16:43.187465 6 log.go:172] (0xc00223d900) (1) Data frame sent I0524 11:16:43.187496 6 log.go:172] (0xc0008be580) (0xc00223d900) Stream removed, broadcasting: 1 I0524 11:16:43.187554 6 log.go:172] (0xc0008be580) Go away received I0524 11:16:43.187682 6 log.go:172] (0xc0008be580) (0xc00223d900) Stream removed, broadcasting: 1 I0524 11:16:43.187758 6 log.go:172] (0xc0008be580) (0xc001c0c5a0) Stream removed, broadcasting: 3 I0524 11:16:43.187791 6 log.go:172] (0xc0008be580) (0xc0016dc000) Stream removed, broadcasting: 5 May 24 11:16:43.187: INFO: Found all expected endpoints: [netserver-0] May 24 11:16:43.191: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.54 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-85g8b PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:16:43.191: INFO: >>> kubeConfig: /root/.kube/config I0524 11:16:43.225052 6 log.go:172] (0xc0006f3550) (0xc001c0c780) Create stream I0524 11:16:43.225082 6 log.go:172] (0xc0006f3550) (0xc001c0c780) Stream added, broadcasting: 1 I0524 11:16:43.227623 6 log.go:172] (0xc0006f3550) Reply frame received for 1 I0524 11:16:43.227675 6 log.go:172] (0xc0006f3550) (0xc00223dae0) Create stream I0524 11:16:43.227696 6 log.go:172] (0xc0006f3550) (0xc00223dae0) Stream added, broadcasting: 3 I0524 11:16:43.228618 6 log.go:172] (0xc0006f3550) Reply frame received for 3 I0524 11:16:43.228662 6 log.go:172] (0xc0006f3550) (0xc001df6820) Create stream I0524 11:16:43.228680 6 log.go:172] (0xc0006f3550) (0xc001df6820) Stream added, broadcasting: 5 I0524 11:16:43.229847 6 log.go:172] (0xc0006f3550) Reply frame received for 5 I0524 11:16:44.302890 6 log.go:172] (0xc0006f3550) Data frame received for 5 I0524 11:16:44.302957 6 log.go:172] (0xc001df6820) (5) Data frame handling I0524 11:16:44.303024 6 log.go:172] (0xc0006f3550) Data frame received for 3 I0524 11:16:44.303074 6 log.go:172] (0xc00223dae0) (3) Data frame handling I0524 11:16:44.303101 6 log.go:172] (0xc00223dae0) (3) Data frame sent I0524 11:16:44.303119 6 log.go:172] (0xc0006f3550) Data frame received for 3 I0524 11:16:44.303136 6 log.go:172] (0xc00223dae0) (3) Data frame handling I0524 11:16:44.306331 6 log.go:172] (0xc0006f3550) Data frame received for 1 I0524 11:16:44.306367 6 log.go:172] (0xc001c0c780) (1) Data frame handling I0524 11:16:44.306405 6 log.go:172] (0xc001c0c780) (1) Data frame sent I0524 11:16:44.306430 6 log.go:172] (0xc0006f3550) (0xc001c0c780) Stream removed, broadcasting: 1 I0524 11:16:44.306454 6 log.go:172] (0xc0006f3550) Go away received I0524 11:16:44.306604 6 log.go:172] (0xc0006f3550) (0xc001c0c780) Stream removed, broadcasting: 1 I0524 11:16:44.306630 6 log.go:172] (0xc0006f3550) (0xc00223dae0) Stream removed, broadcasting: 3 I0524 11:16:44.306645 6 log.go:172] (0xc0006f3550) (0xc001df6820) Stream removed, broadcasting: 5 May 24 11:16:44.306: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:16:44.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-85g8b" for this suite. 
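
For reference, the UDP reachability probe driven through the exec streams logged above boils down to a single exec from the host-test-container pod against each netserver pod IP. A standalone equivalent of the command shown in the log (namespace, pod IP, and port are specific to this run):

kubectl exec --namespace=e2e-tests-pod-network-test-85g8b host-test-container-pod -c hostexec -- \
  /bin/sh -c "echo 'hostName' | nc -w 1 -u 10.244.2.50 8081 | grep -v '^\s*$'"
# A non-empty reply (the netserver's hostname) is what counts the endpoint as reachable.
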
May 24 11:17:08.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:17:08.392: INFO: namespace: e2e-tests-pod-network-test-85g8b, resource: bindings, ignored listing per whitelist May 24 11:17:08.444: INFO: namespace e2e-tests-pod-network-test-85g8b deletion completed in 24.13137647s • [SLOW TEST:52.697 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:17:08.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 24 11:17:08.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-b2n48' May 24 11:17:10.996: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 24 11:17:10.996: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 24 11:17:11.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-b2n48' May 24 11:17:11.154: INFO: stderr: "" May 24 11:17:11.154: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:17:11.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-b2n48" for this suite. 
May 24 11:17:33.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:17:33.197: INFO: namespace: e2e-tests-kubectl-b2n48, resource: bindings, ignored listing per whitelist May 24 11:17:33.257: INFO: namespace e2e-tests-kubectl-b2n48 deletion completed in 22.100642923s • [SLOW TEST:24.813 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:17:33.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 24 11:17:33.388: INFO: PodSpec: initContainers in spec.initContainers May 24 11:18:22.339: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2a128818-9db0-11ea-9618-0242ac110016", GenerateName:"", Namespace:"e2e-tests-init-container-9dn84", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-9dn84/pods/pod-init-2a128818-9db0-11ea-9618-0242ac110016", UID:"2a132378-9db0-11ea-99e8-0242ac110002", ResourceVersion:"12260724", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725915853, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"388401387"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mlhvc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001dde700), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mlhvc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mlhvc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mlhvc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001191ca8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ec31a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001191d30)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001191d50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001191d58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001191d5c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915853, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915853, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915853, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915853, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.52", StartTime:(*v1.Time)(0xc000ed11c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000ed1200), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000adf500)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://f231d8723cb830a087068a6ac71dec59d271100f1c0dc35788e3611da21458bd"}, v1.ContainerStatus{Name:"init2", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000ed1220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000ed11e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:18:22.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-9dn84" for this suite. May 24 11:18:44.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:18:44.430: INFO: namespace: e2e-tests-init-container-9dn84, resource: bindings, ignored listing per whitelist May 24 11:18:44.456: INFO: namespace e2e-tests-init-container-9dn84 deletion completed in 22.096129104s • [SLOW TEST:71.199 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:18:44.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 11:18:44.557: INFO: Waiting up to 5m0s for pod "downwardapi-volume-547d1075-9db0-11ea-9618-0242ac110016" in namespace "e2e-tests-downward-api-bs6xx" to be "success or failure" May 24 11:18:44.573: INFO: Pod "downwardapi-volume-547d1075-9db0-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.185217ms May 24 11:18:46.578: INFO: Pod "downwardapi-volume-547d1075-9db0-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020815675s May 24 11:18:48.582: INFO: Pod "downwardapi-volume-547d1075-9db0-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025420644s STEP: Saw pod success May 24 11:18:48.583: INFO: Pod "downwardapi-volume-547d1075-9db0-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:18:48.586: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-547d1075-9db0-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 11:18:48.604: INFO: Waiting for pod downwardapi-volume-547d1075-9db0-11ea-9618-0242ac110016 to disappear May 24 11:18:48.608: INFO: Pod downwardapi-volume-547d1075-9db0-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:18:48.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bs6xx" for this suite. May 24 11:18:54.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:18:54.706: INFO: namespace: e2e-tests-downward-api-bs6xx, resource: bindings, ignored listing per whitelist May 24 11:18:54.720: INFO: namespace e2e-tests-downward-api-bs6xx deletion completed in 6.104267371s • [SLOW TEST:10.264 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:18:54.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed May 24 11:18:58.898: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-5a9e62cd-9db0-11ea-9618-0242ac110016", GenerateName:"", Namespace:"e2e-tests-pods-lqtmj", SelfLink:"/api/v1/namespaces/e2e-tests-pods-lqtmj/pods/pod-submit-remove-5a9e62cd-9db0-11ea-9618-0242ac110016", UID:"5a9ff80d-9db0-11ea-99e8-0242ac110002", ResourceVersion:"12260845", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725915934, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"time":"835585395", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6f22x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0010d7200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6f22x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002467788), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001a8c000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024677d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024677f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0024677f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0024677fc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915934, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915937, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915937, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725915934, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.56", StartTime:(*v1.Time)(0xc0023cc6a0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0023cc6c0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://4822532456bb33f53403c3e376c4b2e21357b3432317361b8ad86f7dafc7c301"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 24 11:19:03.916: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:19:03.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-lqtmj" for this suite. 
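
For reference (placeholder pod name; image as in the run above), a minimal sketch of the submit-and-remove flow the pods test drives through the API: create a pod, wait for it to run, delete it gracefully, and confirm the object disappears once the kubelet has observed the termination notice.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: submit-remove-demo
  labels:
    name: foo
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
EOF
kubectl wait --for=condition=Ready pod/submit-remove-demo --timeout=120s
# Graceful delete: the kubelet gets the termination notice, then the API object is removed.
kubectl delete pod submit-remove-demo --grace-period=30
kubectl get pod submit-remove-demo    # eventually: Error from server (NotFound)
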
May 24 11:19:09.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:19:09.993: INFO: namespace: e2e-tests-pods-lqtmj, resource: bindings, ignored listing per whitelist May 24 11:19:10.035: INFO: namespace e2e-tests-pods-lqtmj deletion completed in 6.11177505s • [SLOW TEST:15.315 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:19:10.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 24 11:19:10.217: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-6xggt,SelfLink:/api/v1/namespaces/e2e-tests-watch-6xggt/configmaps/e2e-watch-test-resource-version,UID:63bca093-9db0-11ea-99e8-0242ac110002,ResourceVersion:12260892,Generation:0,CreationTimestamp:2020-05-24 11:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 24 11:19:10.218: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-6xggt,SelfLink:/api/v1/namespaces/e2e-tests-watch-6xggt/configmaps/e2e-watch-test-resource-version,UID:63bca093-9db0-11ea-99e8-0242ac110002,ResourceVersion:12260893,Generation:0,CreationTimestamp:2020-05-24 11:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:19:10.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-6xggt" for this suite. 
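
For reference, the watch-from-resource-version behaviour verified above can be reproduced against the raw API: record a configmap's resourceVersion, mutate and delete the object, then open a watch starting at the recorded version so only the later MODIFIED/DELETED events are replayed. A sketch with placeholder names, using kubectl proxy for API access:

kubectl create configmap e2e-watch-demo --from-literal=mutation=0
RV=$(kubectl get configmap e2e-watch-demo -o jsonpath='{.metadata.resourceVersion}')
kubectl patch configmap e2e-watch-demo --type=merge -p '{"data":{"mutation":"1"}}'
kubectl delete configmap e2e-watch-demo
kubectl proxy --port=8001 &
# Streams back the MODIFIED and DELETED events that happened after $RV
# (within the API server's event-history window); earlier history is not replayed.
curl "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&fieldSelector=metadata.name%3De2e-watch-demo&resourceVersion=${RV}"
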
May 24 11:19:16.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:19:16.308: INFO: namespace: e2e-tests-watch-6xggt, resource: bindings, ignored listing per whitelist May 24 11:19:16.321: INFO: namespace e2e-tests-watch-6xggt deletion completed in 6.097908806s • [SLOW TEST:6.285 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:19:16.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 24 11:19:23.388: INFO: 9 pods remaining May 24 11:19:23.388: INFO: 0 pods has nil DeletionTimestamp May 24 11:19:23.388: INFO: May 24 11:19:24.018: INFO: 0 pods remaining May 24 11:19:24.018: INFO: 0 pods has nil DeletionTimestamp May 24 11:19:24.018: INFO: STEP: Gathering metrics W0524 11:19:25.005383 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 24 11:19:25.005: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:19:25.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-8hd4w" for this suite. 
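
For reference, the deleteOptions behaviour verified above is foreground cascading deletion: the owner (here a replication controller) is kept, carrying a deletion timestamp and the foregroundDeletion finalizer, until the garbage collector has removed all of its pods. A sketch with a placeholder RC name; the --cascade policy form assumes a reasonably recent kubectl:

kubectl delete rc demo-rc --cascade=foreground
# Equivalent raw API call, putting the propagation policy in the DeleteOptions body:
kubectl proxy --port=8001 &
curl -X DELETE \
  "http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/demo-rc" \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
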
May 24 11:19:31.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:19:31.127: INFO: namespace: e2e-tests-gc-8hd4w, resource: bindings, ignored listing per whitelist May 24 11:19:31.155: INFO: namespace e2e-tests-gc-8hd4w deletion completed in 6.147135474s • [SLOW TEST:14.834 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:19:31.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-nc7vw [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-nc7vw STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-nc7vw May 24 11:19:31.304: INFO: Found 0 stateful pods, waiting for 1 May 24 11:19:41.309: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 24 11:19:41.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 24 11:19:41.600: INFO: stderr: "I0524 11:19:41.443309 1962 log.go:172] (0xc000138840) (0xc000517360) Create stream\nI0524 11:19:41.443364 1962 log.go:172] (0xc000138840) (0xc000517360) Stream added, broadcasting: 1\nI0524 11:19:41.445097 1962 log.go:172] (0xc000138840) Reply frame received for 1\nI0524 11:19:41.445253 1962 log.go:172] (0xc000138840) (0xc00062a000) Create stream\nI0524 11:19:41.445266 1962 log.go:172] (0xc000138840) (0xc00062a000) Stream added, broadcasting: 3\nI0524 11:19:41.446099 1962 log.go:172] (0xc000138840) Reply frame received for 3\nI0524 11:19:41.446120 1962 log.go:172] (0xc000138840) (0xc00062a0a0) Create stream\nI0524 11:19:41.446126 1962 log.go:172] (0xc000138840) (0xc00062a0a0) Stream added, broadcasting: 5\nI0524 11:19:41.446882 1962 log.go:172] (0xc000138840) Reply frame received for 5\nI0524 11:19:41.594127 1962 log.go:172] 
(0xc000138840) Data frame received for 5\nI0524 11:19:41.594254 1962 log.go:172] (0xc000138840) Data frame received for 3\nI0524 11:19:41.594306 1962 log.go:172] (0xc00062a000) (3) Data frame handling\nI0524 11:19:41.594337 1962 log.go:172] (0xc00062a000) (3) Data frame sent\nI0524 11:19:41.594351 1962 log.go:172] (0xc000138840) Data frame received for 3\nI0524 11:19:41.594359 1962 log.go:172] (0xc00062a000) (3) Data frame handling\nI0524 11:19:41.594373 1962 log.go:172] (0xc00062a0a0) (5) Data frame handling\nI0524 11:19:41.596190 1962 log.go:172] (0xc000138840) Data frame received for 1\nI0524 11:19:41.596218 1962 log.go:172] (0xc000517360) (1) Data frame handling\nI0524 11:19:41.596271 1962 log.go:172] (0xc000517360) (1) Data frame sent\nI0524 11:19:41.596308 1962 log.go:172] (0xc000138840) (0xc000517360) Stream removed, broadcasting: 1\nI0524 11:19:41.596386 1962 log.go:172] (0xc000138840) Go away received\nI0524 11:19:41.596557 1962 log.go:172] (0xc000138840) (0xc000517360) Stream removed, broadcasting: 1\nI0524 11:19:41.596586 1962 log.go:172] (0xc000138840) (0xc00062a000) Stream removed, broadcasting: 3\nI0524 11:19:41.596601 1962 log.go:172] (0xc000138840) (0xc00062a0a0) Stream removed, broadcasting: 5\n" May 24 11:19:41.600: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 24 11:19:41.600: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 24 11:19:41.605: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 24 11:19:51.610: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 11:19:51.610: INFO: Waiting for statefulset status.replicas updated to 0 May 24 11:19:51.628: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999545s May 24 11:19:52.631: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993701986s May 24 11:19:53.636: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990113552s May 24 11:19:54.641: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.985328377s May 24 11:19:55.646: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.980229155s May 24 11:19:56.651: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.975089389s May 24 11:19:57.656: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.97034415s May 24 11:19:58.661: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.964960674s May 24 11:19:59.666: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.960629848s May 24 11:20:00.671: INFO: Verifying statefulset ss doesn't scale past 1 for another 955.612697ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-nc7vw May 24 11:20:01.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:20:01.920: INFO: stderr: "I0524 11:20:01.805811 1985 log.go:172] (0xc00013a630) (0xc000682640) Create stream\nI0524 11:20:01.805877 1985 log.go:172] (0xc00013a630) (0xc000682640) Stream added, broadcasting: 1\nI0524 11:20:01.807829 1985 log.go:172] (0xc00013a630) Reply frame received for 1\nI0524 11:20:01.807885 1985 log.go:172] (0xc00013a630) (0xc0003ecbe0) Create stream\nI0524 11:20:01.807906 1985 log.go:172] (0xc00013a630) 
(0xc0003ecbe0) Stream added, broadcasting: 3\nI0524 11:20:01.808735 1985 log.go:172] (0xc00013a630) Reply frame received for 3\nI0524 11:20:01.808764 1985 log.go:172] (0xc00013a630) (0xc000022000) Create stream\nI0524 11:20:01.808773 1985 log.go:172] (0xc00013a630) (0xc000022000) Stream added, broadcasting: 5\nI0524 11:20:01.809626 1985 log.go:172] (0xc00013a630) Reply frame received for 5\nI0524 11:20:01.914145 1985 log.go:172] (0xc00013a630) Data frame received for 5\nI0524 11:20:01.914191 1985 log.go:172] (0xc000022000) (5) Data frame handling\nI0524 11:20:01.914220 1985 log.go:172] (0xc00013a630) Data frame received for 3\nI0524 11:20:01.914238 1985 log.go:172] (0xc0003ecbe0) (3) Data frame handling\nI0524 11:20:01.914270 1985 log.go:172] (0xc0003ecbe0) (3) Data frame sent\nI0524 11:20:01.914288 1985 log.go:172] (0xc00013a630) Data frame received for 3\nI0524 11:20:01.914316 1985 log.go:172] (0xc0003ecbe0) (3) Data frame handling\nI0524 11:20:01.915744 1985 log.go:172] (0xc00013a630) Data frame received for 1\nI0524 11:20:01.915781 1985 log.go:172] (0xc000682640) (1) Data frame handling\nI0524 11:20:01.915800 1985 log.go:172] (0xc000682640) (1) Data frame sent\nI0524 11:20:01.915819 1985 log.go:172] (0xc00013a630) (0xc000682640) Stream removed, broadcasting: 1\nI0524 11:20:01.915845 1985 log.go:172] (0xc00013a630) Go away received\nI0524 11:20:01.916000 1985 log.go:172] (0xc00013a630) (0xc000682640) Stream removed, broadcasting: 1\nI0524 11:20:01.916019 1985 log.go:172] (0xc00013a630) (0xc0003ecbe0) Stream removed, broadcasting: 3\nI0524 11:20:01.916036 1985 log.go:172] (0xc00013a630) (0xc000022000) Stream removed, broadcasting: 5\n" May 24 11:20:01.920: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 24 11:20:01.920: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 24 11:20:01.923: INFO: Found 1 stateful pods, waiting for 3 May 24 11:20:11.928: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 24 11:20:11.928: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 24 11:20:11.928: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 24 11:20:11.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 24 11:20:12.160: INFO: stderr: "I0524 11:20:12.066954 2007 log.go:172] (0xc000138630) (0xc000724640) Create stream\nI0524 11:20:12.067042 2007 log.go:172] (0xc000138630) (0xc000724640) Stream added, broadcasting: 1\nI0524 11:20:12.069580 2007 log.go:172] (0xc000138630) Reply frame received for 1\nI0524 11:20:12.069623 2007 log.go:172] (0xc000138630) (0xc000672d20) Create stream\nI0524 11:20:12.069639 2007 log.go:172] (0xc000138630) (0xc000672d20) Stream added, broadcasting: 3\nI0524 11:20:12.070601 2007 log.go:172] (0xc000138630) Reply frame received for 3\nI0524 11:20:12.070657 2007 log.go:172] (0xc000138630) (0xc00069c000) Create stream\nI0524 11:20:12.070673 2007 log.go:172] (0xc000138630) (0xc00069c000) Stream added, broadcasting: 5\nI0524 11:20:12.071558 2007 log.go:172] (0xc000138630) Reply frame received for 5\nI0524 11:20:12.154352 2007 log.go:172] (0xc000138630) Data frame received for 5\nI0524 
11:20:12.154384 2007 log.go:172] (0xc00069c000) (5) Data frame handling\nI0524 11:20:12.154432 2007 log.go:172] (0xc000138630) Data frame received for 3\nI0524 11:20:12.154460 2007 log.go:172] (0xc000672d20) (3) Data frame handling\nI0524 11:20:12.154472 2007 log.go:172] (0xc000672d20) (3) Data frame sent\nI0524 11:20:12.154477 2007 log.go:172] (0xc000138630) Data frame received for 3\nI0524 11:20:12.154481 2007 log.go:172] (0xc000672d20) (3) Data frame handling\nI0524 11:20:12.155967 2007 log.go:172] (0xc000138630) Data frame received for 1\nI0524 11:20:12.155984 2007 log.go:172] (0xc000724640) (1) Data frame handling\nI0524 11:20:12.155995 2007 log.go:172] (0xc000724640) (1) Data frame sent\nI0524 11:20:12.156095 2007 log.go:172] (0xc000138630) (0xc000724640) Stream removed, broadcasting: 1\nI0524 11:20:12.156269 2007 log.go:172] (0xc000138630) Go away received\nI0524 11:20:12.156324 2007 log.go:172] (0xc000138630) (0xc000724640) Stream removed, broadcasting: 1\nI0524 11:20:12.156361 2007 log.go:172] (0xc000138630) (0xc000672d20) Stream removed, broadcasting: 3\nI0524 11:20:12.156388 2007 log.go:172] (0xc000138630) (0xc00069c000) Stream removed, broadcasting: 5\n" May 24 11:20:12.160: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 24 11:20:12.160: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 24 11:20:12.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 24 11:20:12.427: INFO: stderr: "I0524 11:20:12.290520 2030 log.go:172] (0xc00014c840) (0xc00062b4a0) Create stream\nI0524 11:20:12.290584 2030 log.go:172] (0xc00014c840) (0xc00062b4a0) Stream added, broadcasting: 1\nI0524 11:20:12.295557 2030 log.go:172] (0xc00014c840) Reply frame received for 1\nI0524 11:20:12.295597 2030 log.go:172] (0xc00014c840) (0xc000782000) Create stream\nI0524 11:20:12.295606 2030 log.go:172] (0xc00014c840) (0xc000782000) Stream added, broadcasting: 3\nI0524 11:20:12.296855 2030 log.go:172] (0xc00014c840) Reply frame received for 3\nI0524 11:20:12.296883 2030 log.go:172] (0xc00014c840) (0xc0006c6000) Create stream\nI0524 11:20:12.296893 2030 log.go:172] (0xc00014c840) (0xc0006c6000) Stream added, broadcasting: 5\nI0524 11:20:12.297946 2030 log.go:172] (0xc00014c840) Reply frame received for 5\nI0524 11:20:12.418627 2030 log.go:172] (0xc00014c840) Data frame received for 3\nI0524 11:20:12.418763 2030 log.go:172] (0xc000782000) (3) Data frame handling\nI0524 11:20:12.418798 2030 log.go:172] (0xc000782000) (3) Data frame sent\nI0524 11:20:12.418903 2030 log.go:172] (0xc00014c840) Data frame received for 5\nI0524 11:20:12.418925 2030 log.go:172] (0xc0006c6000) (5) Data frame handling\nI0524 11:20:12.418967 2030 log.go:172] (0xc00014c840) Data frame received for 3\nI0524 11:20:12.418993 2030 log.go:172] (0xc000782000) (3) Data frame handling\nI0524 11:20:12.422506 2030 log.go:172] (0xc00014c840) Data frame received for 1\nI0524 11:20:12.422543 2030 log.go:172] (0xc00062b4a0) (1) Data frame handling\nI0524 11:20:12.422564 2030 log.go:172] (0xc00062b4a0) (1) Data frame sent\nI0524 11:20:12.422596 2030 log.go:172] (0xc00014c840) (0xc00062b4a0) Stream removed, broadcasting: 1\nI0524 11:20:12.422637 2030 log.go:172] (0xc00014c840) Go away received\nI0524 11:20:12.422855 2030 log.go:172] (0xc00014c840) (0xc00062b4a0) Stream removed, broadcasting: 1\nI0524 
11:20:12.422875 2030 log.go:172] (0xc00014c840) (0xc000782000) Stream removed, broadcasting: 3\nI0524 11:20:12.422886 2030 log.go:172] (0xc00014c840) (0xc0006c6000) Stream removed, broadcasting: 5\n" May 24 11:20:12.427: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 24 11:20:12.427: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 24 11:20:12.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 24 11:20:12.677: INFO: stderr: "I0524 11:20:12.545683 2053 log.go:172] (0xc000138790) (0xc0006174a0) Create stream\nI0524 11:20:12.545767 2053 log.go:172] (0xc000138790) (0xc0006174a0) Stream added, broadcasting: 1\nI0524 11:20:12.548137 2053 log.go:172] (0xc000138790) Reply frame received for 1\nI0524 11:20:12.548171 2053 log.go:172] (0xc000138790) (0xc00026e000) Create stream\nI0524 11:20:12.548179 2053 log.go:172] (0xc000138790) (0xc00026e000) Stream added, broadcasting: 3\nI0524 11:20:12.549079 2053 log.go:172] (0xc000138790) Reply frame received for 3\nI0524 11:20:12.549263 2053 log.go:172] (0xc000138790) (0xc000278000) Create stream\nI0524 11:20:12.549282 2053 log.go:172] (0xc000138790) (0xc000278000) Stream added, broadcasting: 5\nI0524 11:20:12.550167 2053 log.go:172] (0xc000138790) Reply frame received for 5\nI0524 11:20:12.669516 2053 log.go:172] (0xc000138790) Data frame received for 3\nI0524 11:20:12.669569 2053 log.go:172] (0xc00026e000) (3) Data frame handling\nI0524 11:20:12.669658 2053 log.go:172] (0xc00026e000) (3) Data frame sent\nI0524 11:20:12.669680 2053 log.go:172] (0xc000138790) Data frame received for 3\nI0524 11:20:12.669697 2053 log.go:172] (0xc00026e000) (3) Data frame handling\nI0524 11:20:12.669942 2053 log.go:172] (0xc000138790) Data frame received for 5\nI0524 11:20:12.670039 2053 log.go:172] (0xc000278000) (5) Data frame handling\nI0524 11:20:12.672425 2053 log.go:172] (0xc000138790) Data frame received for 1\nI0524 11:20:12.672466 2053 log.go:172] (0xc0006174a0) (1) Data frame handling\nI0524 11:20:12.672489 2053 log.go:172] (0xc0006174a0) (1) Data frame sent\nI0524 11:20:12.672513 2053 log.go:172] (0xc000138790) (0xc0006174a0) Stream removed, broadcasting: 1\nI0524 11:20:12.672543 2053 log.go:172] (0xc000138790) Go away received\nI0524 11:20:12.672831 2053 log.go:172] (0xc000138790) (0xc0006174a0) Stream removed, broadcasting: 1\nI0524 11:20:12.672870 2053 log.go:172] (0xc000138790) (0xc00026e000) Stream removed, broadcasting: 3\nI0524 11:20:12.672885 2053 log.go:172] (0xc000138790) (0xc000278000) Stream removed, broadcasting: 5\n" May 24 11:20:12.678: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 24 11:20:12.678: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 24 11:20:12.678: INFO: Waiting for statefulset status.replicas updated to 0 May 24 11:20:12.680: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 24 11:20:22.687: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 11:20:22.687: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 24 11:20:22.687: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 24 11:20:22.777: 
INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999725s May 24 11:20:23.783: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.914995763s May 24 11:20:24.787: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.909446578s May 24 11:20:25.792: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.904795532s May 24 11:20:26.798: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.900096987s May 24 11:20:27.803: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.893963925s May 24 11:20:28.808: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.888782148s May 24 11:20:29.814: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.883829715s May 24 11:20:30.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.878073582s May 24 11:20:31.825: INFO: Verifying statefulset ss doesn't scale past 3 for another 872.371705ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-nc7vw May 24 11:20:32.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:20:33.042: INFO: stderr: "I0524 11:20:32.946207 2076 log.go:172] (0xc00082a2c0) (0xc000738640) Create stream\nI0524 11:20:32.946260 2076 log.go:172] (0xc00082a2c0) (0xc000738640) Stream added, broadcasting: 1\nI0524 11:20:32.948367 2076 log.go:172] (0xc00082a2c0) Reply frame received for 1\nI0524 11:20:32.948415 2076 log.go:172] (0xc00082a2c0) (0xc000668c80) Create stream\nI0524 11:20:32.948437 2076 log.go:172] (0xc00082a2c0) (0xc000668c80) Stream added, broadcasting: 3\nI0524 11:20:32.949792 2076 log.go:172] (0xc00082a2c0) Reply frame received for 3\nI0524 11:20:32.949830 2076 log.go:172] (0xc00082a2c0) (0xc0007386e0) Create stream\nI0524 11:20:32.949843 2076 log.go:172] (0xc00082a2c0) (0xc0007386e0) Stream added, broadcasting: 5\nI0524 11:20:32.950899 2076 log.go:172] (0xc00082a2c0) Reply frame received for 5\nI0524 11:20:33.036727 2076 log.go:172] (0xc00082a2c0) Data frame received for 5\nI0524 11:20:33.036786 2076 log.go:172] (0xc0007386e0) (5) Data frame handling\nI0524 11:20:33.036819 2076 log.go:172] (0xc00082a2c0) Data frame received for 3\nI0524 11:20:33.036841 2076 log.go:172] (0xc000668c80) (3) Data frame handling\nI0524 11:20:33.036924 2076 log.go:172] (0xc000668c80) (3) Data frame sent\nI0524 11:20:33.036944 2076 log.go:172] (0xc00082a2c0) Data frame received for 3\nI0524 11:20:33.036956 2076 log.go:172] (0xc000668c80) (3) Data frame handling\nI0524 11:20:33.038042 2076 log.go:172] (0xc00082a2c0) Data frame received for 1\nI0524 11:20:33.038090 2076 log.go:172] (0xc000738640) (1) Data frame handling\nI0524 11:20:33.038122 2076 log.go:172] (0xc000738640) (1) Data frame sent\nI0524 11:20:33.038146 2076 log.go:172] (0xc00082a2c0) (0xc000738640) Stream removed, broadcasting: 1\nI0524 11:20:33.038178 2076 log.go:172] (0xc00082a2c0) Go away received\nI0524 11:20:33.038388 2076 log.go:172] (0xc00082a2c0) (0xc000738640) Stream removed, broadcasting: 1\nI0524 11:20:33.038421 2076 log.go:172] (0xc00082a2c0) (0xc000668c80) Stream removed, broadcasting: 3\nI0524 11:20:33.038437 2076 log.go:172] (0xc00082a2c0) (0xc0007386e0) Stream removed, broadcasting: 5\n" May 24 11:20:33.043: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 24 11:20:33.043: INFO: stdout of mv -v 
/tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 24 11:20:33.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:20:33.259: INFO: stderr: "I0524 11:20:33.183872 2098 log.go:172] (0xc000160790) (0xc000583540) Create stream\nI0524 11:20:33.183923 2098 log.go:172] (0xc000160790) (0xc000583540) Stream added, broadcasting: 1\nI0524 11:20:33.186478 2098 log.go:172] (0xc000160790) Reply frame received for 1\nI0524 11:20:33.186524 2098 log.go:172] (0xc000160790) (0xc0004fc000) Create stream\nI0524 11:20:33.186538 2098 log.go:172] (0xc000160790) (0xc0004fc000) Stream added, broadcasting: 3\nI0524 11:20:33.188334 2098 log.go:172] (0xc000160790) Reply frame received for 3\nI0524 11:20:33.188394 2098 log.go:172] (0xc000160790) (0xc000320000) Create stream\nI0524 11:20:33.188416 2098 log.go:172] (0xc000160790) (0xc000320000) Stream added, broadcasting: 5\nI0524 11:20:33.189811 2098 log.go:172] (0xc000160790) Reply frame received for 5\nI0524 11:20:33.254508 2098 log.go:172] (0xc000160790) Data frame received for 5\nI0524 11:20:33.254555 2098 log.go:172] (0xc000320000) (5) Data frame handling\nI0524 11:20:33.254580 2098 log.go:172] (0xc000160790) Data frame received for 3\nI0524 11:20:33.254589 2098 log.go:172] (0xc0004fc000) (3) Data frame handling\nI0524 11:20:33.254603 2098 log.go:172] (0xc0004fc000) (3) Data frame sent\nI0524 11:20:33.254628 2098 log.go:172] (0xc000160790) Data frame received for 3\nI0524 11:20:33.254647 2098 log.go:172] (0xc0004fc000) (3) Data frame handling\nI0524 11:20:33.255637 2098 log.go:172] (0xc000160790) Data frame received for 1\nI0524 11:20:33.255661 2098 log.go:172] (0xc000583540) (1) Data frame handling\nI0524 11:20:33.255677 2098 log.go:172] (0xc000583540) (1) Data frame sent\nI0524 11:20:33.255692 2098 log.go:172] (0xc000160790) (0xc000583540) Stream removed, broadcasting: 1\nI0524 11:20:33.255706 2098 log.go:172] (0xc000160790) Go away received\nI0524 11:20:33.255892 2098 log.go:172] (0xc000160790) (0xc000583540) Stream removed, broadcasting: 1\nI0524 11:20:33.255912 2098 log.go:172] (0xc000160790) (0xc0004fc000) Stream removed, broadcasting: 3\nI0524 11:20:33.255917 2098 log.go:172] (0xc000160790) (0xc000320000) Stream removed, broadcasting: 5\n" May 24 11:20:33.259: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 24 11:20:33.259: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 24 11:20:33.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:20:33.446: INFO: rc: 1 May 24 11:20:33.446: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] I0524 11:20:33.389620 2120 log.go:172] (0xc00015c840) (0xc0007746e0) Create stream I0524 11:20:33.389664 2120 log.go:172] (0xc00015c840) (0xc0007746e0) Stream added, broadcasting: 1 I0524 11:20:33.391697 2120 log.go:172] (0xc00015c840) Reply frame received for 1 I0524 11:20:33.391740 2120 log.go:172] (0xc00015c840) (0xc00035cd20) Create stream I0524 11:20:33.391760 
2120 log.go:172] (0xc00015c840) (0xc00035cd20) Stream added, broadcasting: 3 I0524 11:20:33.392410 2120 log.go:172] (0xc00015c840) Reply frame received for 3 I0524 11:20:33.392439 2120 log.go:172] (0xc00015c840) (0xc000100000) Create stream I0524 11:20:33.392449 2120 log.go:172] (0xc00015c840) (0xc000100000) Stream added, broadcasting: 5 I0524 11:20:33.393061 2120 log.go:172] (0xc00015c840) Reply frame received for 5 I0524 11:20:33.443512 2120 log.go:172] (0xc00015c840) Data frame received for 1 I0524 11:20:33.443540 2120 log.go:172] (0xc0007746e0) (1) Data frame handling I0524 11:20:33.443552 2120 log.go:172] (0xc0007746e0) (1) Data frame sent I0524 11:20:33.443564 2120 log.go:172] (0xc00015c840) (0xc0007746e0) Stream removed, broadcasting: 1 I0524 11:20:33.443605 2120 log.go:172] (0xc00015c840) (0xc00035cd20) Stream removed, broadcasting: 3 I0524 11:20:33.443728 2120 log.go:172] (0xc00015c840) (0xc000100000) Stream removed, broadcasting: 5 I0524 11:20:33.443778 2120 log.go:172] (0xc00015c840) Go away received I0524 11:20:33.443814 2120 log.go:172] (0xc00015c840) (0xc0007746e0) Stream removed, broadcasting: 1 I0524 11:20:33.443838 2120 log.go:172] (0xc00015c840) (0xc00035cd20) Stream removed, broadcasting: 3 I0524 11:20:33.443847 2120 log.go:172] (0xc00015c840) (0xc000100000) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "a6187a42e559f5baf47bab61a216e85e77de7594c9b27cb192455834ad5ad857": OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "process_linux.go:101: executing setns process caused \"exit status 1\"": unknown [] 0xc001f71ce0 exit status 1 true [0xc000a7f860 0xc000a7f878 0xc000a7f890] [0xc000a7f860 0xc000a7f878 0xc000a7f890] [0xc000a7f870 0xc000a7f888] [0x935700 0x935700] 0xc0013b61e0 }: Command stdout: stderr: I0524 11:20:33.389620 2120 log.go:172] (0xc00015c840) (0xc0007746e0) Create stream I0524 11:20:33.389664 2120 log.go:172] (0xc00015c840) (0xc0007746e0) Stream added, broadcasting: 1 I0524 11:20:33.391697 2120 log.go:172] (0xc00015c840) Reply frame received for 1 I0524 11:20:33.391740 2120 log.go:172] (0xc00015c840) (0xc00035cd20) Create stream I0524 11:20:33.391760 2120 log.go:172] (0xc00015c840) (0xc00035cd20) Stream added, broadcasting: 3 I0524 11:20:33.392410 2120 log.go:172] (0xc00015c840) Reply frame received for 3 I0524 11:20:33.392439 2120 log.go:172] (0xc00015c840) (0xc000100000) Create stream I0524 11:20:33.392449 2120 log.go:172] (0xc00015c840) (0xc000100000) Stream added, broadcasting: 5 I0524 11:20:33.393061 2120 log.go:172] (0xc00015c840) Reply frame received for 5 I0524 11:20:33.443512 2120 log.go:172] (0xc00015c840) Data frame received for 1 I0524 11:20:33.443540 2120 log.go:172] (0xc0007746e0) (1) Data frame handling I0524 11:20:33.443552 2120 log.go:172] (0xc0007746e0) (1) Data frame sent I0524 11:20:33.443564 2120 log.go:172] (0xc00015c840) (0xc0007746e0) Stream removed, broadcasting: 1 I0524 11:20:33.443605 2120 log.go:172] (0xc00015c840) (0xc00035cd20) Stream removed, broadcasting: 3 I0524 11:20:33.443728 2120 log.go:172] (0xc00015c840) (0xc000100000) Stream removed, broadcasting: 5 I0524 11:20:33.443778 2120 log.go:172] (0xc00015c840) Go away received I0524 11:20:33.443814 2120 log.go:172] (0xc00015c840) (0xc0007746e0) Stream removed, broadcasting: 1 I0524 11:20:33.443838 2120 log.go:172] (0xc00015c840) (0xc00035cd20) Stream removed, broadcasting: 3 I0524 11:20:33.443847 2120 log.go:172] (0xc00015c840) 
(0xc000100000) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "a6187a42e559f5baf47bab61a216e85e77de7594c9b27cb192455834ad5ad857": OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "process_linux.go:101: executing setns process caused \"exit status 1\"": unknown error: exit status 1 May 24 11:20:43.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:20:43.531: INFO: rc: 1 May 24 11:20:43.531: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00163f110 exit status 1 true [0xc00200ccb8 0xc00200ccd0 0xc00200cce8] [0xc00200ccb8 0xc00200ccd0 0xc00200cce8] [0xc00200ccc8 0xc00200cce0] [0x935700 0x935700] 0xc00240d8c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:20:53.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:20:53.621: INFO: rc: 1 May 24 11:20:53.621: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023c0ab0 exit status 1 true [0xc00041cd78 0xc00041cd90 0xc00041cda8] [0xc00041cd78 0xc00041cd90 0xc00041cda8] [0xc00041cd88 0xc00041cda0] [0x935700 0x935700] 0xc0023e68a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:21:03.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:21:03.726: INFO: rc: 1 May 24 11:21:03.726: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023c0bd0 exit status 1 true [0xc00041cdb0 0xc00041cdc8 0xc00041cde0] [0xc00041cdb0 0xc00041cdc8 0xc00041cde0] [0xc00041cdc0 0xc00041cdd8] [0x935700 0x935700] 0xc0023e73e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:21:13.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:21:13.822: INFO: rc: 1 May 24 11:21:13.822: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002324120 exit status 1 true 
[0xc0015fc000 0xc0015fc018 0xc0015fc048] [0xc0015fc000 0xc0015fc018 0xc0015fc048] [0xc0015fc010 0xc0015fc028] [0x935700 0x935700] 0xc0025d2a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:21:23.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:21:23.921: INFO: rc: 1 May 24 11:21:23.922: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002324240 exit status 1 true [0xc0015fc050 0xc0015fc068 0xc0015fc080] [0xc0015fc050 0xc0015fc068 0xc0015fc080] [0xc0015fc060 0xc0015fc078] [0x935700 0x935700] 0xc0025d2d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:21:33.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:21:34.022: INFO: rc: 1 May 24 11:21:34.023: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002324390 exit status 1 true [0xc0015fc088 0xc0015fc0a0 0xc0015fc0b8] [0xc0015fc088 0xc0015fc0a0 0xc0015fc0b8] [0xc0015fc098 0xc0015fc0b0] [0x935700 0x935700] 0xc0025261e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:21:44.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:21:44.112: INFO: rc: 1 May 24 11:21:44.113: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002324870 exit status 1 true [0xc0015fc0c0 0xc0015fc0d8 0xc0015fc0f0] [0xc0015fc0c0 0xc0015fc0d8 0xc0015fc0f0] [0xc0015fc0d0 0xc0015fc0e8] [0x935700 0x935700] 0xc002527920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:21:54.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:21:54.207: INFO: rc: 1 May 24 11:21:54.207: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0021fa150 exit status 1 true [0xc00000e100 0xc00000e1a8 0xc00000e1e8] [0xc00000e100 0xc00000e1a8 0xc00000e1e8] [0xc00000e198 0xc00000e1c8] [0x935700 0x935700] 0xc00219e1e0 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-2" not found error: exit status 1 May 24 11:22:04.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:22:04.303: INFO: rc: 1 May 24 11:22:04.303: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002206330 exit status 1 true [0xc000a7e008 0xc000a7e028 0xc000a7e080] [0xc000a7e008 0xc000a7e028 0xc000a7e080] [0xc000a7e020 0xc000a7e060] [0x935700 0x935700] 0xc0024bc1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:22:14.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:22:14.388: INFO: rc: 1 May 24 11:22:14.389: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023249c0 exit status 1 true [0xc0015fc0f8 0xc0015fc118 0xc0015fc130] [0xc0015fc0f8 0xc0015fc118 0xc0015fc130] [0xc0015fc110 0xc0015fc128] [0x935700 0x935700] 0xc002527bc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:22:24.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:22:24.486: INFO: rc: 1 May 24 11:22:24.486: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f8e180 exit status 1 true [0xc000444088 0xc0004441a8 0xc0004441d8] [0xc000444088 0xc0004441a8 0xc0004441d8] [0xc000444198 0xc0004441c8] [0x935700 0x935700] 0xc001d20300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:22:34.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:22:34.588: INFO: rc: 1 May 24 11:22:34.588: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0022064b0 exit status 1 true [0xc000a7e0b0 0xc000a7e128 0xc000a7e190] [0xc000a7e0b0 0xc000a7e128 0xc000a7e190] [0xc000a7e100 0xc000a7e180] [0x935700 0x935700] 0xc0024bc8a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:22:44.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw 
ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:22:44.687: INFO: rc: 1 May 24 11:22:44.687: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002324b10 exit status 1 true [0xc0015fc138 0xc0015fc150 0xc0015fc188] [0xc0015fc138 0xc0015fc150 0xc0015fc188] [0xc0015fc148 0xc0015fc178] [0x935700 0x935700] 0xc002527e60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:22:54.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:22:54.768: INFO: rc: 1 May 24 11:22:54.768: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002324c60 exit status 1 true [0xc0015fc1a0 0xc0015fc1c0 0xc0015fc220] [0xc0015fc1a0 0xc0015fc1c0 0xc0015fc220] [0xc0015fc1b0 0xc0015fc1f8] [0x935700 0x935700] 0xc001e56180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:23:04.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:23:04.860: INFO: rc: 1 May 24 11:23:04.860: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0021fa2a0 exit status 1 true [0xc00000e1f8 0xc00000e228 0xc00000e258] [0xc00000e1f8 0xc00000e228 0xc00000e258] [0xc00000e218 0xc00000e248] [0x935700 0x935700] 0xc00219e480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:23:14.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:23:14.958: INFO: rc: 1 May 24 11:23:14.958: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002324150 exit status 1 true [0xc0015fc000 0xc0015fc018 0xc0015fc048] [0xc0015fc000 0xc0015fc018 0xc0015fc048] [0xc0015fc010 0xc0015fc028] [0x935700 0x935700] 0xc0025261e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:23:24.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:23:25.051: INFO: rc: 1 May 24 11:23:25.051: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f8e1e0 exit status 1 true [0xc000444088 0xc0004441a8 0xc0004441d8] [0xc000444088 0xc0004441a8 0xc0004441d8] [0xc000444198 0xc0004441c8] [0x935700 0x935700] 0xc0025d2a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:23:35.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:23:35.133: INFO: rc: 1 May 24 11:23:35.133: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0021fa120 exit status 1 true [0xc00000e100 0xc00000e1a8 0xc00000e1e8] [0xc00000e100 0xc00000e1a8 0xc00000e1e8] [0xc00000e198 0xc00000e1c8] [0x935700 0x935700] 0xc001e56240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:23:45.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:23:45.228: INFO: rc: 1 May 24 11:23:45.228: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f8e330 exit status 1 true [0xc000444208 0xc0004442d0 0xc000444340] [0xc000444208 0xc0004442d0 0xc000444340] [0xc000444280 0xc000444330] [0x935700 0x935700] 0xc0025d2d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:23:55.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:23:55.331: INFO: rc: 1 May 24 11:23:55.331: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023242d0 exit status 1 true [0xc0015fc050 0xc0015fc068 0xc0015fc080] [0xc0015fc050 0xc0015fc068 0xc0015fc080] [0xc0015fc060 0xc0015fc078] [0x935700 0x935700] 0xc002527920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:24:05.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:24:05.427: INFO: rc: 1 May 24 11:24:05.427: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error 
from server (NotFound): pods "ss-2" not found [] 0xc0021fa300 exit status 1 true [0xc00000e1f8 0xc00000e228 0xc00000e258] [0xc00000e1f8 0xc00000e228 0xc00000e258] [0xc00000e218 0xc00000e248] [0x935700 0x935700] 0xc001e56660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:24:15.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:24:15.526: INFO: rc: 1 May 24 11:24:15.526: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0021fa420 exit status 1 true [0xc00000e278 0xc00000e2a8 0xc00000e2d8] [0xc00000e278 0xc00000e2a8 0xc00000e2d8] [0xc00000e298 0xc00000e2c8] [0x935700 0x935700] 0xc001e569c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:24:25.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:24:25.610: INFO: rc: 1 May 24 11:24:25.610: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002206360 exit status 1 true [0xc000a7e008 0xc000a7e028 0xc000a7e080] [0xc000a7e008 0xc000a7e028 0xc000a7e080] [0xc000a7e020 0xc000a7e060] [0x935700 0x935700] 0xc001d20300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:24:35.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:24:35.696: INFO: rc: 1 May 24 11:24:35.696: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0022064e0 exit status 1 true [0xc000a7e0b0 0xc000a7e128 0xc000a7e190] [0xc000a7e0b0 0xc000a7e128 0xc000a7e190] [0xc000a7e100 0xc000a7e180] [0x935700 0x935700] 0xc001d20600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:24:45.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:24:45.787: INFO: rc: 1 May 24 11:24:45.787: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f8e480 exit status 1 true [0xc000444358 0xc000444370 0xc0004443e8] [0xc000444358 0xc000444370 0xc0004443e8] [0xc000444368 0xc0004443b8] 
[0x935700 0x935700] 0xc00219e1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:24:55.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:24:55.929: INFO: rc: 1 May 24 11:24:55.929: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0021fa5a0 exit status 1 true [0xc00000e2e8 0xc00000e318 0xc00000ebb8] [0xc00000e2e8 0xc00000e318 0xc00000ebb8] [0xc00000e308 0xc00000eba8] [0x935700 0x935700] 0xc001e57500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:25:05.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:25:06.016: INFO: rc: 1 May 24 11:25:06.016: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0021fa6c0 exit status 1 true [0xc00000ebc0 0xc00000ebd8 0xc00000ec00] [0xc00000ebc0 0xc00000ebd8 0xc00000ec00] [0xc00000ebd0 0xc00000ebf0] [0x935700 0x935700] 0xc0024bc1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:25:16.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:25:16.105: INFO: rc: 1 May 24 11:25:16.105: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002206660 exit status 1 true [0xc000a7e1a8 0xc000a7e1e8 0xc000a7e240] [0xc000a7e1a8 0xc000a7e1e8 0xc000a7e240] [0xc000a7e1d8 0xc000a7e208] [0x935700 0x935700] 0xc001d208a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:25:26.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:25:26.193: INFO: rc: 1 May 24 11:25:26.193: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f8e150 exit status 1 true [0xc00016e000 0xc00000e198 0xc00000e1c8] [0xc00016e000 0xc00000e198 0xc00000e1c8] [0xc00000e140 0xc00000e1b8] [0x935700 0x935700] 0xc0025d2a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 24 11:25:36.194: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nc7vw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:25:36.283: INFO: rc: 1 May 24 11:25:36.283: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: May 24 11:25:36.283: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 24 11:25:36.294: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nc7vw May 24 11:25:36.296: INFO: Scaling statefulset ss to 0 May 24 11:25:36.305: INFO: Waiting for statefulset status.replicas updated to 0 May 24 11:25:36.308: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:25:36.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-nc7vw" for this suite. May 24 11:25:42.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:25:42.445: INFO: namespace: e2e-tests-statefulset-nc7vw, resource: bindings, ignored listing per whitelist May 24 11:25:42.498: INFO: namespace e2e-tests-statefulset-nc7vw deletion completed in 6.173565049s • [SLOW TEST:371.343 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:25:42.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 24 11:25:42.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-92hbx' May 24 11:25:42.748: INFO: stderr: "" May 24 11:25:42.748: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was 
created May 24 11:25:47.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-92hbx -o json' May 24 11:25:47.908: INFO: stderr: "" May 24 11:25:47.908: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-24T11:25:42Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-92hbx\",\n \"resourceVersion\": \"12261994\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-92hbx/pods/e2e-test-nginx-pod\",\n \"uid\": \"4dbe8cd4-9db1-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-zckht\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-zckht\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-zckht\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-24T11:25:42Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-24T11:25:45Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-24T11:25:45Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-24T11:25:42Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://36dac2d5dc6473f73450a9293766878e9e7f896745cfb9a9a2a0010964ccdf3c\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-24T11:25:45Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.63\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-24T11:25:42Z\"\n }\n}\n" STEP: replace the image in the pod May 24 11:25:47.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-92hbx' May 24 11:25:48.183: INFO: stderr: "" May 24 11:25:48.183: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image 
docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 May 24 11:25:48.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-92hbx' May 24 11:25:51.851: INFO: stderr: "" May 24 11:25:51.851: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:25:51.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-92hbx" for this suite. May 24 11:25:57.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:25:57.925: INFO: namespace: e2e-tests-kubectl-92hbx, resource: bindings, ignored listing per whitelist May 24 11:25:57.952: INFO: namespace e2e-tests-kubectl-92hbx deletion completed in 6.096988673s • [SLOW TEST:15.454 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:25:57.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 24 11:26:02.604: INFO: Successfully updated pod "labelsupdate56df6357-9db1-11ea-9618-0242ac110016" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:26:06.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-l5wcf" for this suite. 
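For comparison with the "Kubectl replace should update a single-container pod's image" spec that finished just above, the following is a minimal by-hand sketch of the same update-via-replace workflow. It is illustrative only: it runs in whatever namespace the current kubeconfig points at, the pod name is borrowed from the log, and the --generator flag matches the 1.13-era kubectl used by this suite (newer kubectl creates a bare pod without it).

# 1.13-era invocation as seen in the log; on newer kubectl, drop --generator.
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 \
    --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod
# Export the live object, swap the image, and hand the result back to `kubectl replace`.
kubectl get pod e2e-test-nginx-pod -o json \
    | sed 's|nginx:1.14-alpine|busybox:1.29|' \
    | kubectl replace -f -
# The pod spec should now carry the new image; then clean up.
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'
kubectl delete pod e2e-test-nginx-pod

Replace succeeds here because spec.containers[*].image is one of the few pod fields the API server allows to change in place; the same replace with most other spec edits would be rejected.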
May 24 11:26:28.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:26:28.687: INFO: namespace: e2e-tests-projected-l5wcf, resource: bindings, ignored listing per whitelist May 24 11:26:28.734: INFO: namespace e2e-tests-projected-l5wcf deletion completed in 22.092299362s • [SLOW TEST:30.782 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:26:28.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-xmlgx [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-xmlgx STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-xmlgx STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-xmlgx STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-xmlgx May 24 11:26:34.916: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-xmlgx, name: ss-0, uid: 6c111edd-9db1-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. May 24 11:26:34.938: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-xmlgx, name: ss-0, uid: 6c111edd-9db1-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. May 24 11:26:34.957: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-xmlgx, name: ss-0, uid: 6c111edd-9db1-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
May 24 11:26:34.975: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-xmlgx STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-xmlgx STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-xmlgx and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 24 11:26:39.037: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xmlgx May 24 11:26:39.040: INFO: Scaling statefulset ss to 0 May 24 11:26:59.059: INFO: Waiting for statefulset status.replicas updated to 0 May 24 11:26:59.063: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:26:59.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-xmlgx" for this suite. May 24 11:27:05.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:27:05.186: INFO: namespace: e2e-tests-statefulset-xmlgx, resource: bindings, ignored listing per whitelist May 24 11:27:05.191: INFO: namespace e2e-tests-statefulset-xmlgx deletion completed in 6.110159671s • [SLOW TEST:36.456 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:27:05.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-l6xt STEP: Creating a pod to test atomic-volume-subpath May 24 11:27:05.347: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-l6xt" in namespace "e2e-tests-subpath-n46k7" to be "success or failure" May 24 11:27:05.352: INFO: Pod "pod-subpath-test-configmap-l6xt": Phase="Pending", Reason="", readiness=false. Elapsed: 5.054002ms May 24 11:27:07.356: INFO: Pod "pod-subpath-test-configmap-l6xt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009333393s May 24 11:27:09.361: INFO: Pod "pod-subpath-test-configmap-l6xt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013660131s May 24 11:27:11.365: INFO: Pod "pod-subpath-test-configmap-l6xt": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.018159384s May 24 11:27:13.370: INFO: Pod "pod-subpath-test-configmap-l6xt": Phase="Running", Reason="", readiness=false. Elapsed: 8.022974414s May 24 11:27:15.374: INFO: Pod "pod-subpath-test-configmap-l6xt": Phase="Running", Reason="", readiness=false. Elapsed: 10.027583739s May 24 11:27:17.379: INFO: Pod "pod-subpath-test-configmap-l6xt": Phase="Running", Reason="", readiness=false. Elapsed: 12.031882783s May 24 11:27:19.399: INFO: Pod "pod-subpath-test-configmap-l6xt": Phase="Running", Reason="", readiness=false. Elapsed: 14.052440598s May 24 11:27:21.403: INFO: Pod "pod-subpath-test-configmap-l6xt": Phase="Running", Reason="", readiness=false. Elapsed: 16.056524452s May 24 11:27:23.408: INFO: Pod "pod-subpath-test-configmap-l6xt": Phase="Running", Reason="", readiness=false. Elapsed: 18.060812037s May 24 11:27:25.412: INFO: Pod "pod-subpath-test-configmap-l6xt": Phase="Running", Reason="", readiness=false. Elapsed: 20.06510559s May 24 11:27:27.416: INFO: Pod "pod-subpath-test-configmap-l6xt": Phase="Running", Reason="", readiness=false. Elapsed: 22.069393778s May 24 11:27:29.420: INFO: Pod "pod-subpath-test-configmap-l6xt": Phase="Running", Reason="", readiness=false. Elapsed: 24.073480222s May 24 11:27:31.424: INFO: Pod "pod-subpath-test-configmap-l6xt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.077050553s STEP: Saw pod success May 24 11:27:31.424: INFO: Pod "pod-subpath-test-configmap-l6xt" satisfied condition "success or failure" May 24 11:27:31.427: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-l6xt container test-container-subpath-configmap-l6xt: STEP: delete the pod May 24 11:27:31.486: INFO: Waiting for pod pod-subpath-test-configmap-l6xt to disappear May 24 11:27:31.502: INFO: Pod pod-subpath-test-configmap-l6xt no longer exists STEP: Deleting pod pod-subpath-test-configmap-l6xt May 24 11:27:31.502: INFO: Deleting pod "pod-subpath-test-configmap-l6xt" in namespace "e2e-tests-subpath-n46k7" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:27:31.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-n46k7" for this suite. 
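The "Subpath ... configmap pod" spec above exercises the stock pattern of projecting a single ConfigMap key to one file with subPath instead of shadowing the whole mount directory. A minimal sketch of that idea follows; the names (ConfigMap my-config, pod pod-subpath-demo) and the target path are illustrative assumptions, and it runs in the current kubeconfig namespace.

kubectl create configmap my-config --from-literal=index.html='hello from a configmap key'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /usr/share/nginx/html/index.html"]
    volumeMounts:
    - name: cfg
      mountPath: /usr/share/nginx/html/index.html
      subPath: index.html        # project only this key as a single file
  volumes:
  - name: cfg
    configMap:
      name: my-config
EOF
sleep 10 && kubectl logs pod-subpath-demo   # expect: hello from a configmap key

One caveat worth remembering: a subPath mount is taken from a snapshot of the volume, so unlike a whole-volume configMap mount it does not pick up later ConfigMap edits.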
May 24 11:27:37.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:27:37.531: INFO: namespace: e2e-tests-subpath-n46k7, resource: bindings, ignored listing per whitelist May 24 11:27:37.586: INFO: namespace e2e-tests-subpath-n46k7 deletion completed in 6.078108414s • [SLOW TEST:32.395 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:27:37.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 24 11:27:42.318: INFO: Successfully updated pod "labelsupdate9242a5de-9db1-11ea-9618-0242ac110016" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:27:46.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-82z9v" for this suite. 
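The "Downward API volume should update labels on modification" spec just above (and the earlier Projected downwardAPI variant) both lean on the same mechanism: a volume file generated from metadata.labels is rewritten by the kubelet when the pod's labels change, so a running container observes label updates without restarting. A minimal sketch with illustrative names, run in the current kubeconfig namespace:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    stage: before
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo ---; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
# Flip a label and watch the mounted file follow (allow a kubelet sync period).
kubectl label pod labels-demo stage=after --overwrite
kubectl logs labels-demo --tail=4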
May 24 11:28:08.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:28:08.452: INFO: namespace: e2e-tests-downward-api-82z9v, resource: bindings, ignored listing per whitelist May 24 11:28:08.491: INFO: namespace e2e-tests-downward-api-82z9v deletion completed in 22.1080894s • [SLOW TEST:30.904 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:28:08.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-a4b7583f-9db1-11ea-9618-0242ac110016 STEP: Creating configMap with name cm-test-opt-upd-a4b7588c-9db1-11ea-9618-0242ac110016 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a4b7583f-9db1-11ea-9618-0242ac110016 STEP: Updating configmap cm-test-opt-upd-a4b7588c-9db1-11ea-9618-0242ac110016 STEP: Creating configMap with name cm-test-opt-create-a4b758a5-9db1-11ea-9618-0242ac110016 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:29:43.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7wcnd" for this suite. 
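The "optional updates should be reflected in volume" spec above combines two volume behaviors: a configMap volume marked optional: true lets the pod start with an empty mount while the ConfigMap is still missing, and the mounted contents are refreshed as the ConfigMap is later created, updated, or deleted. A minimal sketch with illustrative names (ConfigMap cm-opt, pod cm-optional-demo), run in the current kubeconfig namespace:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm/greeting 2>/dev/null || echo 'no configmap yet'; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cm
  volumes:
  - name: cfg
    configMap:
      name: cm-opt
      optional: true             # pod starts even though cm-opt does not exist yet
EOF
# Create the ConfigMap afterwards; the file shows up in the running pod's volume.
kubectl create configmap cm-opt --from-literal=greeting='value-1'
kubectl logs cm-optional-demo --tail=3

Propagation is eventually consistent: the kubelet refreshes configMap volumes on its periodic sync, so the change lands within roughly a minute rather than instantly, which matches the "waiting to observe update in volume" step above.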
May 24 11:30:05.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:30:05.472: INFO: namespace: e2e-tests-configmap-7wcnd, resource: bindings, ignored listing per whitelist May 24 11:30:05.517: INFO: namespace e2e-tests-configmap-7wcnd deletion completed in 22.111408967s • [SLOW TEST:117.026 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:30:05.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-ea6e94df-9db1-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 11:30:05.636: INFO: Waiting up to 5m0s for pod "pod-secrets-ea71341f-9db1-11ea-9618-0242ac110016" in namespace "e2e-tests-secrets-tzjct" to be "success or failure" May 24 11:30:05.651: INFO: Pod "pod-secrets-ea71341f-9db1-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 15.532988ms May 24 11:30:07.656: INFO: Pod "pod-secrets-ea71341f-9db1-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019794199s May 24 11:30:09.660: INFO: Pod "pod-secrets-ea71341f-9db1-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02404886s STEP: Saw pod success May 24 11:30:09.660: INFO: Pod "pod-secrets-ea71341f-9db1-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:30:09.663: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-ea71341f-9db1-11ea-9618-0242ac110016 container secret-env-test: STEP: delete the pod May 24 11:30:09.841: INFO: Waiting for pod pod-secrets-ea71341f-9db1-11ea-9618-0242ac110016 to disappear May 24 11:30:09.878: INFO: Pod pod-secrets-ea71341f-9db1-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:30:09.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tzjct" for this suite. 
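The Secrets test above injects a Secret key into a container environment variable and checks the value in the container output. An illustrative Secret/pod pairing (names, key and value are placeholders, not the generated ones in the log):

---
apiVersion: v1
kind: Secret
metadata:
  name: secret-env-demo            # illustrative
type: Opaque
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.31
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1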
May 24 11:30:15.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:30:15.938: INFO: namespace: e2e-tests-secrets-tzjct, resource: bindings, ignored listing per whitelist May 24 11:30:15.968: INFO: namespace e2e-tests-secrets-tzjct deletion completed in 6.085511496s • [SLOW TEST:10.450 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:30:15.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 24 11:30:16.088: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:16.091: INFO: Number of nodes with available pods: 0 May 24 11:30:16.091: INFO: Node hunter-worker is running more than one daemon pod May 24 11:30:17.096: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:17.100: INFO: Number of nodes with available pods: 0 May 24 11:30:17.100: INFO: Node hunter-worker is running more than one daemon pod May 24 11:30:18.234: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:18.238: INFO: Number of nodes with available pods: 0 May 24 11:30:18.238: INFO: Node hunter-worker is running more than one daemon pod May 24 11:30:19.139: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:19.143: INFO: Number of nodes with available pods: 0 May 24 11:30:19.143: INFO: Node hunter-worker is running more than one daemon pod May 24 11:30:20.096: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:20.099: INFO: Number of nodes with available pods: 0 May 24 11:30:20.099: INFO: Node hunter-worker is running more than one daemon pod May 24 11:30:21.096: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 24 11:30:21.100: INFO: Number of nodes with available pods: 2 May 24 11:30:21.100: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 24 11:30:21.118: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:21.121: INFO: Number of nodes with available pods: 1 May 24 11:30:21.121: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:22.125: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:22.129: INFO: Number of nodes with available pods: 1 May 24 11:30:22.129: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:23.125: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:23.129: INFO: Number of nodes with available pods: 1 May 24 11:30:23.129: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:24.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:24.129: INFO: Number of nodes with available pods: 1 May 24 11:30:24.129: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:25.127: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:25.131: INFO: Number of nodes with available pods: 1 May 24 11:30:25.131: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:26.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:26.130: INFO: Number of nodes with available pods: 1 May 24 11:30:26.130: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:27.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:27.130: INFO: Number of nodes with available pods: 1 May 24 11:30:27.130: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:28.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:28.130: INFO: Number of nodes with available pods: 1 May 24 11:30:28.130: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:29.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:29.129: INFO: Number of nodes with available pods: 1 May 24 11:30:29.129: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:30.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 
11:30:30.131: INFO: Number of nodes with available pods: 1 May 24 11:30:30.131: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:31.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:31.129: INFO: Number of nodes with available pods: 1 May 24 11:30:31.129: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:32.127: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:32.131: INFO: Number of nodes with available pods: 1 May 24 11:30:32.131: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:33.199: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:33.203: INFO: Number of nodes with available pods: 1 May 24 11:30:33.203: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:34.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:34.129: INFO: Number of nodes with available pods: 1 May 24 11:30:34.129: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:35.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:35.129: INFO: Number of nodes with available pods: 1 May 24 11:30:35.129: INFO: Node hunter-worker2 is running more than one daemon pod May 24 11:30:36.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 11:30:36.130: INFO: Number of nodes with available pods: 2 May 24 11:30:36.130: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bl86m, will wait for the garbage collector to delete the pods May 24 11:30:36.194: INFO: Deleting DaemonSet.extensions daemon-set took: 6.915385ms May 24 11:30:36.294: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.342538ms May 24 11:30:41.798: INFO: Number of nodes with available pods: 0 May 24 11:30:41.798: INFO: Number of running nodes: 0, number of available pods: 0 May 24 11:30:41.801: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bl86m/daemonsets","resourceVersion":"12262951"},"items":null} May 24 11:30:41.803: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bl86m/pods","resourceVersion":"12262951"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:30:41.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-bl86m" for this suite. 
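The DaemonSet test above creates a simple DaemonSet, waits for one available pod per schedulable node, deletes one pod and waits for the controller to revive it. A minimal manifest of that kind (labels and image are illustrative):

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                 # the name used by the test; everything else here is illustrative
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine   # illustrative; any long-running container works

Because the pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, the controller skips the control-plane node, which is what the repeated "DaemonSet pods can't tolerate node hunter-control-plane" lines record.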
May 24 11:30:47.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:30:47.932: INFO: namespace: e2e-tests-daemonsets-bl86m, resource: bindings, ignored listing per whitelist May 24 11:30:47.947: INFO: namespace e2e-tests-daemonsets-bl86m deletion completed in 6.098359159s • [SLOW TEST:31.979 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:30:47.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 11:30:48.138: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03c61a73-9db2-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-vhbm2" to be "success or failure" May 24 11:30:48.142: INFO: Pod "downwardapi-volume-03c61a73-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.221736ms May 24 11:30:50.163: INFO: Pod "downwardapi-volume-03c61a73-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024738574s May 24 11:30:52.167: INFO: Pod "downwardapi-volume-03c61a73-9db2-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029063399s STEP: Saw pod success May 24 11:30:52.168: INFO: Pod "downwardapi-volume-03c61a73-9db2-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:30:52.171: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-03c61a73-9db2-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 11:30:52.199: INFO: Waiting for pod downwardapi-volume-03c61a73-9db2-11ea-9618-0242ac110016 to disappear May 24 11:30:52.215: INFO: Pod downwardapi-volume-03c61a73-9db2-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:30:52.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vhbm2" for this suite. 
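The projected downward API test above asserts the permission bits applied to the projected files. The relevant knob is defaultMode on the projected volume; the sketch below uses an illustrative mode and file name, not the exact values the conformance test checks:

---
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400            # illustrative mode; applies to every projected file
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name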
May 24 11:30:58.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:30:58.240: INFO: namespace: e2e-tests-projected-vhbm2, resource: bindings, ignored listing per whitelist May 24 11:30:58.310: INFO: namespace e2e-tests-projected-vhbm2 deletion completed in 6.091326122s • [SLOW TEST:10.362 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:30:58.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-jdbw STEP: Creating a pod to test atomic-volume-subpath May 24 11:30:58.463: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jdbw" in namespace "e2e-tests-subpath-pkl24" to be "success or failure" May 24 11:30:58.483: INFO: Pod "pod-subpath-test-secret-jdbw": Phase="Pending", Reason="", readiness=false. Elapsed: 20.106556ms May 24 11:31:00.487: INFO: Pod "pod-subpath-test-secret-jdbw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024609587s May 24 11:31:02.491: INFO: Pod "pod-subpath-test-secret-jdbw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028781432s May 24 11:31:04.496: INFO: Pod "pod-subpath-test-secret-jdbw": Phase="Running", Reason="", readiness=true. Elapsed: 6.033433915s May 24 11:31:06.501: INFO: Pod "pod-subpath-test-secret-jdbw": Phase="Running", Reason="", readiness=false. Elapsed: 8.038530704s May 24 11:31:08.506: INFO: Pod "pod-subpath-test-secret-jdbw": Phase="Running", Reason="", readiness=false. Elapsed: 10.043408506s May 24 11:31:10.510: INFO: Pod "pod-subpath-test-secret-jdbw": Phase="Running", Reason="", readiness=false. Elapsed: 12.04752316s May 24 11:31:12.514: INFO: Pod "pod-subpath-test-secret-jdbw": Phase="Running", Reason="", readiness=false. Elapsed: 14.051845368s May 24 11:31:14.518: INFO: Pod "pod-subpath-test-secret-jdbw": Phase="Running", Reason="", readiness=false. Elapsed: 16.055816802s May 24 11:31:16.522: INFO: Pod "pod-subpath-test-secret-jdbw": Phase="Running", Reason="", readiness=false. Elapsed: 18.059679835s May 24 11:31:18.526: INFO: Pod "pod-subpath-test-secret-jdbw": Phase="Running", Reason="", readiness=false. Elapsed: 20.063166293s May 24 11:31:20.531: INFO: Pod "pod-subpath-test-secret-jdbw": Phase="Running", Reason="", readiness=false. Elapsed: 22.068072326s May 24 11:31:22.535: INFO: Pod "pod-subpath-test-secret-jdbw": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.072448298s May 24 11:31:24.540: INFO: Pod "pod-subpath-test-secret-jdbw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.077101741s STEP: Saw pod success May 24 11:31:24.540: INFO: Pod "pod-subpath-test-secret-jdbw" satisfied condition "success or failure" May 24 11:31:24.543: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-jdbw container test-container-subpath-secret-jdbw: STEP: delete the pod May 24 11:31:24.580: INFO: Waiting for pod pod-subpath-test-secret-jdbw to disappear May 24 11:31:24.586: INFO: Pod pod-subpath-test-secret-jdbw no longer exists STEP: Deleting pod pod-subpath-test-secret-jdbw May 24 11:31:24.587: INFO: Deleting pod "pod-subpath-test-secret-jdbw" in namespace "e2e-tests-subpath-pkl24" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:31:24.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-pkl24" for this suite. May 24 11:31:30.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:31:30.643: INFO: namespace: e2e-tests-subpath-pkl24, resource: bindings, ignored listing per whitelist May 24 11:31:30.674: INFO: namespace e2e-tests-subpath-pkl24 deletion completed in 6.082723424s • [SLOW TEST:32.364 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:31:30.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-q8cfx STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q8cfx to expose endpoints map[] May 24 11:31:30.832: INFO: Get endpoints failed (4.115507ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 24 11:31:31.836: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q8cfx exposes endpoints map[] (1.008265955s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-q8cfx STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q8cfx to expose endpoints map[pod1:[100]] May 24 11:31:34.876: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q8cfx exposes endpoints map[pod1:[100]] (3.033036741s elapsed) STEP: Creating pod pod2 
in namespace e2e-tests-services-q8cfx STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q8cfx to expose endpoints map[pod1:[100] pod2:[101]] May 24 11:31:38.042: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q8cfx exposes endpoints map[pod1:[100] pod2:[101]] (3.163369804s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-q8cfx STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q8cfx to expose endpoints map[pod2:[101]] May 24 11:31:39.099: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q8cfx exposes endpoints map[pod2:[101]] (1.052192554s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-q8cfx STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q8cfx to expose endpoints map[] May 24 11:31:40.133: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q8cfx exposes endpoints map[] (1.02942642s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:31:40.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-q8cfx" for this suite. May 24 11:32:02.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:32:02.506: INFO: namespace: e2e-tests-services-q8cfx, resource: bindings, ignored listing per whitelist May 24 11:32:02.599: INFO: namespace e2e-tests-services-q8cfx deletion completed in 22.256360919s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:31.925 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:32:02.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 11:32:02.728: INFO: Waiting up to 5m0s for pod "downwardapi-volume-303aa774-9db2-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-vzpss" to be "success or failure" May 24 11:32:02.744: INFO: Pod "downwardapi-volume-303aa774-9db2-11ea-9618-0242ac110016": Phase="Pending", 
Reason="", readiness=false. Elapsed: 15.855252ms May 24 11:32:04.858: INFO: Pod "downwardapi-volume-303aa774-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129957928s May 24 11:32:06.862: INFO: Pod "downwardapi-volume-303aa774-9db2-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.133909334s STEP: Saw pod success May 24 11:32:06.862: INFO: Pod "downwardapi-volume-303aa774-9db2-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:32:06.864: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-303aa774-9db2-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 11:32:07.128: INFO: Waiting for pod downwardapi-volume-303aa774-9db2-11ea-9618-0242ac110016 to disappear May 24 11:32:07.131: INFO: Pod downwardapi-volume-303aa774-9db2-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:32:07.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vzpss" for this suite. May 24 11:32:13.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:32:13.241: INFO: namespace: e2e-tests-projected-vzpss, resource: bindings, ignored listing per whitelist May 24 11:32:13.244: INFO: namespace e2e-tests-projected-vzpss deletion completed in 6.108930415s • [SLOW TEST:10.645 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:32:13.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 24 11:32:13.342: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:32:18.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-lwg88" for this suite. 
May 24 11:32:24.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:32:24.816: INFO: namespace: e2e-tests-init-container-lwg88, resource: bindings, ignored listing per whitelist May 24 11:32:24.836: INFO: namespace e2e-tests-init-container-lwg88 deletion completed in 6.107320373s • [SLOW TEST:11.592 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:32:24.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 11:32:24.937: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d786e32-9db2-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-bclwc" to be "success or failure" May 24 11:32:24.942: INFO: Pod "downwardapi-volume-3d786e32-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 4.526938ms May 24 11:32:26.946: INFO: Pod "downwardapi-volume-3d786e32-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008550151s May 24 11:32:28.950: INFO: Pod "downwardapi-volume-3d786e32-9db2-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012478246s STEP: Saw pod success May 24 11:32:28.950: INFO: Pod "downwardapi-volume-3d786e32-9db2-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:32:28.953: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-3d786e32-9db2-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 11:32:29.009: INFO: Waiting for pod downwardapi-volume-3d786e32-9db2-11ea-9618-0242ac110016 to disappear May 24 11:32:29.176: INFO: Pod downwardapi-volume-3d786e32-9db2-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:32:29.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bclwc" for this suite. 
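The projected downward API test above exposes the container's memory limit to the container through a resourceFieldRef item. A sketch with illustrative names and a small example limit:

---
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-limits-demo    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory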
May 24 11:32:35.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:32:35.585: INFO: namespace: e2e-tests-projected-bclwc, resource: bindings, ignored listing per whitelist May 24 11:32:35.639: INFO: namespace e2e-tests-projected-bclwc deletion completed in 6.439872614s • [SLOW TEST:10.803 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:32:35.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 11:32:35.818: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"43eb4c58-9db2-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00265d3f2), BlockOwnerDeletion:(*bool)(0xc00265d3f3)}} May 24 11:32:35.865: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"43ea4d95-9db2-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001776b3a), BlockOwnerDeletion:(*bool)(0xc001776b3b)}} May 24 11:32:35.886: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"43eadd32-9db2-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001776cea), BlockOwnerDeletion:(*bool)(0xc001776ceb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:32:40.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-qbzc6" for this suite. 
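The garbage collector test above builds a cycle of owner references (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, as the OwnerReferences dumps show) and verifies that collection is not blocked by the cycle. The metadata involved looks roughly like the fragment below; the uid is a placeholder, since real uids are assigned by the API server at creation time:

---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod1
    uid: "00000000-0000-0000-0000-000000000000"   # placeholder; must be the owner's server-assigned uid
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1    # illustrative; any container works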
May 24 11:32:47.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:32:47.060: INFO: namespace: e2e-tests-gc-qbzc6, resource: bindings, ignored listing per whitelist May 24 11:32:47.111: INFO: namespace e2e-tests-gc-qbzc6 deletion completed in 6.125355415s • [SLOW TEST:11.472 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:32:47.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:32:53.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-ffw4s" for this suite. May 24 11:32:59.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:32:59.551: INFO: namespace: e2e-tests-namespaces-ffw4s, resource: bindings, ignored listing per whitelist May 24 11:32:59.601: INFO: namespace e2e-tests-namespaces-ffw4s deletion completed in 6.115989937s STEP: Destroying namespace "e2e-tests-nsdeletetest-7tt2p" for this suite. May 24 11:32:59.604: INFO: Namespace e2e-tests-nsdeletetest-7tt2p was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-p2kvk" for this suite. 
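The Namespaces test above creates a namespace, adds a Service to it, deletes the namespace, then recreates it and confirms the Service is gone. Namespaced objects are removed together with their namespace, so a pair like the one below (illustrative names, selector omitted) is all the test needs:

---
apiVersion: v1
kind: Namespace
metadata:
  name: nsdelete-demo              # illustrative
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: nsdelete-demo         # deleting the namespace above removes this Service with it
spec:
  ports:
  - port: 80
    targetPort: 80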
May 24 11:33:05.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:33:05.676: INFO: namespace: e2e-tests-nsdeletetest-p2kvk, resource: bindings, ignored listing per whitelist May 24 11:33:05.725: INFO: namespace e2e-tests-nsdeletetest-p2kvk deletion completed in 6.120785559s • [SLOW TEST:18.613 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:33:05.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 24 11:33:13.917: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:13.937: INFO: Pod pod-with-prestop-exec-hook still exists May 24 11:33:15.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:15.942: INFO: Pod pod-with-prestop-exec-hook still exists May 24 11:33:17.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:17.942: INFO: Pod pod-with-prestop-exec-hook still exists May 24 11:33:19.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:19.943: INFO: Pod pod-with-prestop-exec-hook still exists May 24 11:33:21.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:21.956: INFO: Pod pod-with-prestop-exec-hook still exists May 24 11:33:23.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:23.942: INFO: Pod pod-with-prestop-exec-hook still exists May 24 11:33:25.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:25.942: INFO: Pod pod-with-prestop-exec-hook still exists May 24 11:33:27.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:27.943: INFO: Pod pod-with-prestop-exec-hook still exists May 24 11:33:29.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:29.942: INFO: Pod pod-with-prestop-exec-hook still exists May 24 11:33:31.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:31.941: INFO: Pod pod-with-prestop-exec-hook still exists May 24 11:33:33.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:33.942: INFO: Pod pod-with-prestop-exec-hook still 
exists May 24 11:33:35.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:35.943: INFO: Pod pod-with-prestop-exec-hook still exists May 24 11:33:37.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:37.943: INFO: Pod pod-with-prestop-exec-hook still exists May 24 11:33:39.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:39.942: INFO: Pod pod-with-prestop-exec-hook still exists May 24 11:33:41.938: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 11:33:41.942: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:33:41.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-qvgzw" for this suite. May 24 11:34:03.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:34:04.043: INFO: namespace: e2e-tests-container-lifecycle-hook-qvgzw, resource: bindings, ignored listing per whitelist May 24 11:34:04.043: INFO: namespace e2e-tests-container-lifecycle-hook-qvgzw deletion completed in 22.090575955s • [SLOW TEST:58.318 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:34:04.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 11:34:04.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 24 11:34:04.301: INFO: stderr: "" May 24 11:34:04.301: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 
24 11:34:04.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zpf4w" for this suite. May 24 11:34:10.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:34:10.406: INFO: namespace: e2e-tests-kubectl-zpf4w, resource: bindings, ignored listing per whitelist May 24 11:34:10.424: INFO: namespace e2e-tests-kubectl-zpf4w deletion completed in 6.117116274s • [SLOW TEST:6.381 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:34:10.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 11:34:10.534: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c682447-9db2-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-n947x" to be "success or failure" May 24 11:34:10.537: INFO: Pod "downwardapi-volume-7c682447-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.262395ms May 24 11:34:12.541: INFO: Pod "downwardapi-volume-7c682447-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007465453s May 24 11:34:14.576: INFO: Pod "downwardapi-volume-7c682447-9db2-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042566519s STEP: Saw pod success May 24 11:34:14.576: INFO: Pod "downwardapi-volume-7c682447-9db2-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:34:14.579: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7c682447-9db2-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 11:34:14.598: INFO: Waiting for pod downwardapi-volume-7c682447-9db2-11ea-9618-0242ac110016 to disappear May 24 11:34:14.644: INFO: Pod downwardapi-volume-7c682447-9db2-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:34:14.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-n947x" for this suite. 
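The projected downward API test above projects the container's cpu limit. One detail worth showing that the earlier memory-limit sketch omits is divisor, which scales the projected value (a divisor of 1m yields millicores). Names and values below are illustrative:

---
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpulimit-demo  # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m          # file will contain 500 (millicores) for the limit above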
May 24 11:34:20.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:34:20.737: INFO: namespace: e2e-tests-projected-n947x, resource: bindings, ignored listing per whitelist May 24 11:34:20.753: INFO: namespace e2e-tests-projected-n947x deletion completed in 6.104915507s • [SLOW TEST:10.329 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:34:20.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-75trs/configmap-test-82922bad-9db2-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 11:34:20.898: INFO: Waiting up to 5m0s for pod "pod-configmaps-82946f0a-9db2-11ea-9618-0242ac110016" in namespace "e2e-tests-configmap-75trs" to be "success or failure" May 24 11:34:20.932: INFO: Pod "pod-configmaps-82946f0a-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 33.833145ms May 24 11:34:22.936: INFO: Pod "pod-configmaps-82946f0a-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037670378s May 24 11:34:24.940: INFO: Pod "pod-configmaps-82946f0a-9db2-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041434745s STEP: Saw pod success May 24 11:34:24.940: INFO: Pod "pod-configmaps-82946f0a-9db2-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:34:24.942: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-82946f0a-9db2-11ea-9618-0242ac110016 container env-test: STEP: delete the pod May 24 11:34:24.957: INFO: Waiting for pod pod-configmaps-82946f0a-9db2-11ea-9618-0242ac110016 to disappear May 24 11:34:24.962: INFO: Pod pod-configmaps-82946f0a-9db2-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:34:24.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-75trs" for this suite. 
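The ConfigMap test above injects a ConfigMap key as an environment variable and reads it back from the container output. An illustrative ConfigMap/pod pair (names, key and value are placeholders):

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-env-demo         # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.31
    command: ["sh", "-c", "env | grep CONFIG_DATA"]
    env:
    - name: CONFIG_DATA
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1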
May 24 11:34:31.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:34:31.079: INFO: namespace: e2e-tests-configmap-75trs, resource: bindings, ignored listing per whitelist May 24 11:34:31.145: INFO: namespace e2e-tests-configmap-75trs deletion completed in 6.152618627s • [SLOW TEST:10.392 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:34:31.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:34:35.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-ln76z" for this suite. 
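The Kubelet test above runs a command that always fails and then checks that the container's terminated state carries a reason. A minimal pod of that kind (illustrative names):

---
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo             # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox:1.31
    command: ["/bin/false"]        # exits non-zero immediately

Once it exits, the container status should report state.terminated.reason: Error, retrievable with kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'.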
May 24 11:34:41.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:34:41.503: INFO: namespace: e2e-tests-kubelet-test-ln76z, resource: bindings, ignored listing per whitelist May 24 11:34:41.525: INFO: namespace e2e-tests-kubelet-test-ln76z deletion completed in 6.121463192s • [SLOW TEST:10.380 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:34:41.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 11:34:41.658: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ef730e7-9db2-11ea-9618-0242ac110016" in namespace "e2e-tests-downward-api-r2hqk" to be "success or failure" May 24 11:34:41.663: INFO: Pod "downwardapi-volume-8ef730e7-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 4.898195ms May 24 11:34:43.668: INFO: Pod "downwardapi-volume-8ef730e7-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009431817s May 24 11:34:45.672: INFO: Pod "downwardapi-volume-8ef730e7-9db2-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014100972s STEP: Saw pod success May 24 11:34:45.672: INFO: Pod "downwardapi-volume-8ef730e7-9db2-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:34:45.676: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-8ef730e7-9db2-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 11:34:45.694: INFO: Waiting for pod downwardapi-volume-8ef730e7-9db2-11ea-9618-0242ac110016 to disappear May 24 11:34:45.699: INFO: Pod downwardapi-volume-8ef730e7-9db2-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:34:45.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-r2hqk" for this suite. 
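The Downward API volume test above is the non-projected counterpart of the earlier projected sketches, here exposing the container's memory request. Names and values are illustrative:

---
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-request-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:                   # plain downwardAPI volume, not the projected variant used earlier
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory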
May 24 11:34:51.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:34:51.740: INFO: namespace: e2e-tests-downward-api-r2hqk, resource: bindings, ignored listing per whitelist May 24 11:34:51.808: INFO: namespace e2e-tests-downward-api-r2hqk deletion completed in 6.105880522s • [SLOW TEST:10.283 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:34:51.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:34:55.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-jqhj7" for this suite. 
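The behaviour checked above, a container's stdout ending up in the pod log, can be reproduced directly; the pod name and message are illustrative:

  kubectl run busybox-echo --image=busybox --restart=Never -- echo 'output written by busybox'
  # Once the container has run to completion:
  kubectl logs busybox-echo   # prints: output written by busybox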
May 24 11:35:45.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:35:45.990: INFO: namespace: e2e-tests-kubelet-test-jqhj7, resource: bindings, ignored listing per whitelist May 24 11:35:46.046: INFO: namespace e2e-tests-kubelet-test-jqhj7 deletion completed in 50.090223264s • [SLOW TEST:54.238 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:35:46.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 24 11:35:46.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-pwczh' May 24 11:35:48.740: INFO: stderr: "" May 24 11:35:48.740: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 May 24 11:35:48.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-pwczh' May 24 11:36:01.274: INFO: stderr: "" May 24 11:36:01.274: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:36:01.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pwczh" for this suite. 
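The --generator=run-pod/v1 flag recorded above belongs to this 1.13-era kubectl; run generators were later deprecated and removed, so on a current kubectl the same pod-only run is usually written as sketched here (not part of this run):

  kubectl run e2e-test-nginx-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never
  kubectl get pod e2e-test-nginx-pod
  kubectl delete pod e2e-test-nginx-pod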
May 24 11:36:07.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:36:07.409: INFO: namespace: e2e-tests-kubectl-pwczh, resource: bindings, ignored listing per whitelist May 24 11:36:07.449: INFO: namespace e2e-tests-kubectl-pwczh deletion completed in 6.095222033s • [SLOW TEST:21.403 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:36:07.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 11:36:07.551: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 24 11:36:12.556: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 24 11:36:12.556: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 24 11:36:12.582: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-vlszt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vlszt/deployments/test-cleanup-deployment,UID:c52709ca-9db2-11ea-99e8-0242ac110002,ResourceVersion:12264111,Generation:1,CreationTimestamp:2020-05-24 11:36:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 24 11:36:12.587: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. May 24 11:36:12.587: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 24 11:36:12.587: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-vlszt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vlszt/replicasets/test-cleanup-controller,UID:c227a418-9db2-11ea-99e8-0242ac110002,ResourceVersion:12264112,Generation:1,CreationTimestamp:2020-05-24 11:36:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c52709ca-9db2-11ea-99e8-0242ac110002 0xc0028e8def 0xc0028e8e00}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 24 11:36:12.594: INFO: Pod "test-cleanup-controller-z7czq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-z7czq,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-vlszt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlszt/pods/test-cleanup-controller-z7czq,UID:c22adaa7-9db2-11ea-99e8-0242ac110002,ResourceVersion:12264105,Generation:0,CreationTimestamp:2020-05-24 11:36:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller c227a418-9db2-11ea-99e8-0242ac110002 0xc00286d907 0xc00286d908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z948f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z948f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z948f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00286d980} {node.kubernetes.io/unreachable Exists NoExecute 0xc00286d9a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:36:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:36:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:36:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:36:07 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.73,StartTime:2020-05-24 11:36:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-24 11:36:10 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://301798111fbf03152738bb048a6102320149276c2ead60dd6357fd63f578d5e4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:36:12.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-vlszt" for this suite. May 24 11:36:18.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:36:18.748: INFO: namespace: e2e-tests-deployment-vlszt, resource: bindings, ignored listing per whitelist May 24 11:36:18.770: INFO: namespace e2e-tests-deployment-vlszt deletion completed in 6.157842557s • [SLOW TEST:11.321 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:36:18.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 24 11:36:28.924: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mkvsc PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:36:28.924: INFO: >>> kubeConfig: /root/.kube/config I0524 11:36:28.960673 6 log.go:172] (0xc0008be4d0) (0xc00175b040) Create stream I0524 11:36:28.960706 6 log.go:172] (0xc0008be4d0) (0xc00175b040) Stream added, broadcasting: 1 I0524 11:36:28.962981 6 log.go:172] (0xc0008be4d0) Reply frame received for 1 I0524 11:36:28.963018 6 log.go:172] (0xc0008be4d0) (0xc00197a1e0) Create stream I0524 11:36:28.963031 6 log.go:172] (0xc0008be4d0) (0xc00197a1e0) Stream added, broadcasting: 3 I0524 11:36:28.963913 6 log.go:172] (0xc0008be4d0) Reply frame received for 3 I0524 11:36:28.963947 6 log.go:172] (0xc0008be4d0) (0xc0017bfcc0) Create stream I0524 11:36:28.963960 6 log.go:172] (0xc0008be4d0) (0xc0017bfcc0) Stream added, broadcasting: 5 I0524 11:36:28.964678 6 log.go:172] (0xc0008be4d0) Reply frame received for 5 I0524 11:36:29.040442 6 log.go:172] (0xc0008be4d0) Data frame received for 5 I0524 11:36:29.040500 6 log.go:172] (0xc0017bfcc0) (5) Data frame handling I0524 11:36:29.040542 6 log.go:172] (0xc0008be4d0) Data frame received for 3 I0524 11:36:29.040563 6 
log.go:172] (0xc00197a1e0) (3) Data frame handling I0524 11:36:29.040590 6 log.go:172] (0xc00197a1e0) (3) Data frame sent I0524 11:36:29.040895 6 log.go:172] (0xc0008be4d0) Data frame received for 3 I0524 11:36:29.040967 6 log.go:172] (0xc00197a1e0) (3) Data frame handling I0524 11:36:29.043082 6 log.go:172] (0xc0008be4d0) Data frame received for 1 I0524 11:36:29.043115 6 log.go:172] (0xc00175b040) (1) Data frame handling I0524 11:36:29.043136 6 log.go:172] (0xc00175b040) (1) Data frame sent I0524 11:36:29.043166 6 log.go:172] (0xc0008be4d0) (0xc00175b040) Stream removed, broadcasting: 1 I0524 11:36:29.043215 6 log.go:172] (0xc0008be4d0) Go away received I0524 11:36:29.043268 6 log.go:172] (0xc0008be4d0) (0xc00175b040) Stream removed, broadcasting: 1 I0524 11:36:29.043286 6 log.go:172] (0xc0008be4d0) (0xc00197a1e0) Stream removed, broadcasting: 3 I0524 11:36:29.043300 6 log.go:172] (0xc0008be4d0) (0xc0017bfcc0) Stream removed, broadcasting: 5 May 24 11:36:29.043: INFO: Exec stderr: "" May 24 11:36:29.043: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mkvsc PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:36:29.043: INFO: >>> kubeConfig: /root/.kube/config I0524 11:36:29.074485 6 log.go:172] (0xc0008be9a0) (0xc00175b2c0) Create stream I0524 11:36:29.074531 6 log.go:172] (0xc0008be9a0) (0xc00175b2c0) Stream added, broadcasting: 1 I0524 11:36:29.077256 6 log.go:172] (0xc0008be9a0) Reply frame received for 1 I0524 11:36:29.077334 6 log.go:172] (0xc0008be9a0) (0xc001cba1e0) Create stream I0524 11:36:29.077348 6 log.go:172] (0xc0008be9a0) (0xc001cba1e0) Stream added, broadcasting: 3 I0524 11:36:29.078356 6 log.go:172] (0xc0008be9a0) Reply frame received for 3 I0524 11:36:29.078388 6 log.go:172] (0xc0008be9a0) (0xc0017bfd60) Create stream I0524 11:36:29.078401 6 log.go:172] (0xc0008be9a0) (0xc0017bfd60) Stream added, broadcasting: 5 I0524 11:36:29.079205 6 log.go:172] (0xc0008be9a0) Reply frame received for 5 I0524 11:36:29.142498 6 log.go:172] (0xc0008be9a0) Data frame received for 5 I0524 11:36:29.142530 6 log.go:172] (0xc0017bfd60) (5) Data frame handling I0524 11:36:29.142554 6 log.go:172] (0xc0008be9a0) Data frame received for 3 I0524 11:36:29.142562 6 log.go:172] (0xc001cba1e0) (3) Data frame handling I0524 11:36:29.142570 6 log.go:172] (0xc001cba1e0) (3) Data frame sent I0524 11:36:29.142578 6 log.go:172] (0xc0008be9a0) Data frame received for 3 I0524 11:36:29.142585 6 log.go:172] (0xc001cba1e0) (3) Data frame handling I0524 11:36:29.143630 6 log.go:172] (0xc0008be9a0) Data frame received for 1 I0524 11:36:29.143647 6 log.go:172] (0xc00175b2c0) (1) Data frame handling I0524 11:36:29.143660 6 log.go:172] (0xc00175b2c0) (1) Data frame sent I0524 11:36:29.143671 6 log.go:172] (0xc0008be9a0) (0xc00175b2c0) Stream removed, broadcasting: 1 I0524 11:36:29.143682 6 log.go:172] (0xc0008be9a0) Go away received I0524 11:36:29.143796 6 log.go:172] (0xc0008be9a0) (0xc00175b2c0) Stream removed, broadcasting: 1 I0524 11:36:29.143812 6 log.go:172] (0xc0008be9a0) (0xc001cba1e0) Stream removed, broadcasting: 3 I0524 11:36:29.143821 6 log.go:172] (0xc0008be9a0) (0xc0017bfd60) Stream removed, broadcasting: 5 May 24 11:36:29.143: INFO: Exec stderr: "" May 24 11:36:29.143: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mkvsc PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 
11:36:29.143: INFO: >>> kubeConfig: /root/.kube/config I0524 11:36:29.233064 6 log.go:172] (0xc0006f3290) (0xc001db5a40) Create stream I0524 11:36:29.233097 6 log.go:172] (0xc0006f3290) (0xc001db5a40) Stream added, broadcasting: 1 I0524 11:36:29.235099 6 log.go:172] (0xc0006f3290) Reply frame received for 1 I0524 11:36:29.235138 6 log.go:172] (0xc0006f3290) (0xc00197a280) Create stream I0524 11:36:29.235151 6 log.go:172] (0xc0006f3290) (0xc00197a280) Stream added, broadcasting: 3 I0524 11:36:29.235992 6 log.go:172] (0xc0006f3290) Reply frame received for 3 I0524 11:36:29.236022 6 log.go:172] (0xc0006f3290) (0xc00197a320) Create stream I0524 11:36:29.236035 6 log.go:172] (0xc0006f3290) (0xc00197a320) Stream added, broadcasting: 5 I0524 11:36:29.237038 6 log.go:172] (0xc0006f3290) Reply frame received for 5 I0524 11:36:29.301828 6 log.go:172] (0xc0006f3290) Data frame received for 5 I0524 11:36:29.301870 6 log.go:172] (0xc00197a320) (5) Data frame handling I0524 11:36:29.301907 6 log.go:172] (0xc0006f3290) Data frame received for 3 I0524 11:36:29.301943 6 log.go:172] (0xc00197a280) (3) Data frame handling I0524 11:36:29.301967 6 log.go:172] (0xc00197a280) (3) Data frame sent I0524 11:36:29.301981 6 log.go:172] (0xc0006f3290) Data frame received for 3 I0524 11:36:29.301993 6 log.go:172] (0xc00197a280) (3) Data frame handling I0524 11:36:29.303689 6 log.go:172] (0xc0006f3290) Data frame received for 1 I0524 11:36:29.303771 6 log.go:172] (0xc001db5a40) (1) Data frame handling I0524 11:36:29.303808 6 log.go:172] (0xc001db5a40) (1) Data frame sent I0524 11:36:29.303837 6 log.go:172] (0xc0006f3290) (0xc001db5a40) Stream removed, broadcasting: 1 I0524 11:36:29.303874 6 log.go:172] (0xc0006f3290) Go away received I0524 11:36:29.304081 6 log.go:172] (0xc0006f3290) (0xc001db5a40) Stream removed, broadcasting: 1 I0524 11:36:29.304113 6 log.go:172] (0xc0006f3290) (0xc00197a280) Stream removed, broadcasting: 3 I0524 11:36:29.304154 6 log.go:172] (0xc0006f3290) (0xc00197a320) Stream removed, broadcasting: 5 May 24 11:36:29.304: INFO: Exec stderr: "" May 24 11:36:29.304: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mkvsc PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:36:29.304: INFO: >>> kubeConfig: /root/.kube/config I0524 11:36:29.337505 6 log.go:172] (0xc000b6e2c0) (0xc001cba500) Create stream I0524 11:36:29.337539 6 log.go:172] (0xc000b6e2c0) (0xc001cba500) Stream added, broadcasting: 1 I0524 11:36:29.339681 6 log.go:172] (0xc000b6e2c0) Reply frame received for 1 I0524 11:36:29.339720 6 log.go:172] (0xc000b6e2c0) (0xc00175b360) Create stream I0524 11:36:29.339729 6 log.go:172] (0xc000b6e2c0) (0xc00175b360) Stream added, broadcasting: 3 I0524 11:36:29.340627 6 log.go:172] (0xc000b6e2c0) Reply frame received for 3 I0524 11:36:29.340666 6 log.go:172] (0xc000b6e2c0) (0xc001db5ae0) Create stream I0524 11:36:29.340680 6 log.go:172] (0xc000b6e2c0) (0xc001db5ae0) Stream added, broadcasting: 5 I0524 11:36:29.341802 6 log.go:172] (0xc000b6e2c0) Reply frame received for 5 I0524 11:36:29.401780 6 log.go:172] (0xc000b6e2c0) Data frame received for 3 I0524 11:36:29.401803 6 log.go:172] (0xc00175b360) (3) Data frame handling I0524 11:36:29.401972 6 log.go:172] (0xc00175b360) (3) Data frame sent I0524 11:36:29.402130 6 log.go:172] (0xc000b6e2c0) Data frame received for 5 I0524 11:36:29.402141 6 log.go:172] (0xc001db5ae0) (5) Data frame handling I0524 11:36:29.402169 6 log.go:172] 
(0xc000b6e2c0) Data frame received for 3 I0524 11:36:29.402180 6 log.go:172] (0xc00175b360) (3) Data frame handling I0524 11:36:29.403910 6 log.go:172] (0xc000b6e2c0) Data frame received for 1 I0524 11:36:29.403927 6 log.go:172] (0xc001cba500) (1) Data frame handling I0524 11:36:29.403937 6 log.go:172] (0xc001cba500) (1) Data frame sent I0524 11:36:29.403954 6 log.go:172] (0xc000b6e2c0) (0xc001cba500) Stream removed, broadcasting: 1 I0524 11:36:29.403972 6 log.go:172] (0xc000b6e2c0) Go away received I0524 11:36:29.404033 6 log.go:172] (0xc000b6e2c0) (0xc001cba500) Stream removed, broadcasting: 1 I0524 11:36:29.404058 6 log.go:172] (0xc000b6e2c0) (0xc00175b360) Stream removed, broadcasting: 3 I0524 11:36:29.404068 6 log.go:172] (0xc000b6e2c0) (0xc001db5ae0) Stream removed, broadcasting: 5 May 24 11:36:29.404: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 24 11:36:29.404: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mkvsc PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:36:29.404: INFO: >>> kubeConfig: /root/.kube/config I0524 11:36:29.434403 6 log.go:172] (0xc0007da2c0) (0xc00197a5a0) Create stream I0524 11:36:29.434434 6 log.go:172] (0xc0007da2c0) (0xc00197a5a0) Stream added, broadcasting: 1 I0524 11:36:29.437305 6 log.go:172] (0xc0007da2c0) Reply frame received for 1 I0524 11:36:29.437370 6 log.go:172] (0xc0007da2c0) (0xc00175b400) Create stream I0524 11:36:29.437387 6 log.go:172] (0xc0007da2c0) (0xc00175b400) Stream added, broadcasting: 3 I0524 11:36:29.438193 6 log.go:172] (0xc0007da2c0) Reply frame received for 3 I0524 11:36:29.438236 6 log.go:172] (0xc0007da2c0) (0xc001cba5a0) Create stream I0524 11:36:29.438256 6 log.go:172] (0xc0007da2c0) (0xc001cba5a0) Stream added, broadcasting: 5 I0524 11:36:29.439163 6 log.go:172] (0xc0007da2c0) Reply frame received for 5 I0524 11:36:29.504899 6 log.go:172] (0xc0007da2c0) Data frame received for 5 I0524 11:36:29.504954 6 log.go:172] (0xc001cba5a0) (5) Data frame handling I0524 11:36:29.504986 6 log.go:172] (0xc0007da2c0) Data frame received for 3 I0524 11:36:29.505003 6 log.go:172] (0xc00175b400) (3) Data frame handling I0524 11:36:29.505026 6 log.go:172] (0xc00175b400) (3) Data frame sent I0524 11:36:29.505042 6 log.go:172] (0xc0007da2c0) Data frame received for 3 I0524 11:36:29.505056 6 log.go:172] (0xc00175b400) (3) Data frame handling I0524 11:36:29.506678 6 log.go:172] (0xc0007da2c0) Data frame received for 1 I0524 11:36:29.506702 6 log.go:172] (0xc00197a5a0) (1) Data frame handling I0524 11:36:29.506716 6 log.go:172] (0xc00197a5a0) (1) Data frame sent I0524 11:36:29.506738 6 log.go:172] (0xc0007da2c0) (0xc00197a5a0) Stream removed, broadcasting: 1 I0524 11:36:29.506755 6 log.go:172] (0xc0007da2c0) Go away received I0524 11:36:29.506866 6 log.go:172] (0xc0007da2c0) (0xc00197a5a0) Stream removed, broadcasting: 1 I0524 11:36:29.506889 6 log.go:172] (0xc0007da2c0) (0xc00175b400) Stream removed, broadcasting: 3 I0524 11:36:29.506904 6 log.go:172] (0xc0007da2c0) (0xc001cba5a0) Stream removed, broadcasting: 5 May 24 11:36:29.506: INFO: Exec stderr: "" May 24 11:36:29.506: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mkvsc PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:36:29.507: INFO: >>> kubeConfig: /root/.kube/config 
I0524 11:36:29.541700 6 log.go:172] (0xc0008bee70) (0xc00175b7c0) Create stream I0524 11:36:29.541731 6 log.go:172] (0xc0008bee70) (0xc00175b7c0) Stream added, broadcasting: 1 I0524 11:36:29.544344 6 log.go:172] (0xc0008bee70) Reply frame received for 1 I0524 11:36:29.544383 6 log.go:172] (0xc0008bee70) (0xc001cba640) Create stream I0524 11:36:29.544397 6 log.go:172] (0xc0008bee70) (0xc001cba640) Stream added, broadcasting: 3 I0524 11:36:29.545663 6 log.go:172] (0xc0008bee70) Reply frame received for 3 I0524 11:36:29.545722 6 log.go:172] (0xc0008bee70) (0xc001db5b80) Create stream I0524 11:36:29.545749 6 log.go:172] (0xc0008bee70) (0xc001db5b80) Stream added, broadcasting: 5 I0524 11:36:29.546747 6 log.go:172] (0xc0008bee70) Reply frame received for 5 I0524 11:36:29.605264 6 log.go:172] (0xc0008bee70) Data frame received for 3 I0524 11:36:29.605306 6 log.go:172] (0xc001cba640) (3) Data frame handling I0524 11:36:29.605328 6 log.go:172] (0xc001cba640) (3) Data frame sent I0524 11:36:29.605343 6 log.go:172] (0xc0008bee70) Data frame received for 3 I0524 11:36:29.605354 6 log.go:172] (0xc001cba640) (3) Data frame handling I0524 11:36:29.605510 6 log.go:172] (0xc0008bee70) Data frame received for 5 I0524 11:36:29.605539 6 log.go:172] (0xc001db5b80) (5) Data frame handling I0524 11:36:29.607091 6 log.go:172] (0xc0008bee70) Data frame received for 1 I0524 11:36:29.607115 6 log.go:172] (0xc00175b7c0) (1) Data frame handling I0524 11:36:29.607138 6 log.go:172] (0xc00175b7c0) (1) Data frame sent I0524 11:36:29.607368 6 log.go:172] (0xc0008bee70) (0xc00175b7c0) Stream removed, broadcasting: 1 I0524 11:36:29.607457 6 log.go:172] (0xc0008bee70) Go away received I0524 11:36:29.607578 6 log.go:172] (0xc0008bee70) (0xc00175b7c0) Stream removed, broadcasting: 1 I0524 11:36:29.607614 6 log.go:172] (0xc0008bee70) (0xc001cba640) Stream removed, broadcasting: 3 I0524 11:36:29.607637 6 log.go:172] (0xc0008bee70) (0xc001db5b80) Stream removed, broadcasting: 5 May 24 11:36:29.607: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 24 11:36:29.607: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mkvsc PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:36:29.607: INFO: >>> kubeConfig: /root/.kube/config I0524 11:36:29.639138 6 log.go:172] (0xc0008bf340) (0xc00175ba40) Create stream I0524 11:36:29.639162 6 log.go:172] (0xc0008bf340) (0xc00175ba40) Stream added, broadcasting: 1 I0524 11:36:29.641706 6 log.go:172] (0xc0008bf340) Reply frame received for 1 I0524 11:36:29.641748 6 log.go:172] (0xc0008bf340) (0xc00197a640) Create stream I0524 11:36:29.641761 6 log.go:172] (0xc0008bf340) (0xc00197a640) Stream added, broadcasting: 3 I0524 11:36:29.642644 6 log.go:172] (0xc0008bf340) Reply frame received for 3 I0524 11:36:29.642672 6 log.go:172] (0xc0008bf340) (0xc001db5c20) Create stream I0524 11:36:29.642682 6 log.go:172] (0xc0008bf340) (0xc001db5c20) Stream added, broadcasting: 5 I0524 11:36:29.643483 6 log.go:172] (0xc0008bf340) Reply frame received for 5 I0524 11:36:29.695397 6 log.go:172] (0xc0008bf340) Data frame received for 5 I0524 11:36:29.695419 6 log.go:172] (0xc001db5c20) (5) Data frame handling I0524 11:36:29.695467 6 log.go:172] (0xc0008bf340) Data frame received for 3 I0524 11:36:29.695499 6 log.go:172] (0xc00197a640) (3) Data frame handling I0524 11:36:29.695522 6 log.go:172] (0xc00197a640) (3) Data frame 
sent I0524 11:36:29.695551 6 log.go:172] (0xc0008bf340) Data frame received for 3 I0524 11:36:29.695582 6 log.go:172] (0xc00197a640) (3) Data frame handling I0524 11:36:29.697314 6 log.go:172] (0xc0008bf340) Data frame received for 1 I0524 11:36:29.697350 6 log.go:172] (0xc00175ba40) (1) Data frame handling I0524 11:36:29.697367 6 log.go:172] (0xc00175ba40) (1) Data frame sent I0524 11:36:29.697385 6 log.go:172] (0xc0008bf340) (0xc00175ba40) Stream removed, broadcasting: 1 I0524 11:36:29.697404 6 log.go:172] (0xc0008bf340) Go away received I0524 11:36:29.697720 6 log.go:172] (0xc0008bf340) (0xc00175ba40) Stream removed, broadcasting: 1 I0524 11:36:29.697742 6 log.go:172] (0xc0008bf340) (0xc00197a640) Stream removed, broadcasting: 3 I0524 11:36:29.697754 6 log.go:172] (0xc0008bf340) (0xc001db5c20) Stream removed, broadcasting: 5 May 24 11:36:29.697: INFO: Exec stderr: "" May 24 11:36:29.697: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mkvsc PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:36:29.697: INFO: >>> kubeConfig: /root/.kube/config I0524 11:36:29.758947 6 log.go:172] (0xc0008bf810) (0xc00175bcc0) Create stream I0524 11:36:29.758975 6 log.go:172] (0xc0008bf810) (0xc00175bcc0) Stream added, broadcasting: 1 I0524 11:36:29.765484 6 log.go:172] (0xc0008bf810) Reply frame received for 1 I0524 11:36:29.765529 6 log.go:172] (0xc0008bf810) (0xc001db5cc0) Create stream I0524 11:36:29.765538 6 log.go:172] (0xc0008bf810) (0xc001db5cc0) Stream added, broadcasting: 3 I0524 11:36:29.766508 6 log.go:172] (0xc0008bf810) Reply frame received for 3 I0524 11:36:29.766554 6 log.go:172] (0xc0008bf810) (0xc00197a6e0) Create stream I0524 11:36:29.766562 6 log.go:172] (0xc0008bf810) (0xc00197a6e0) Stream added, broadcasting: 5 I0524 11:36:29.767404 6 log.go:172] (0xc0008bf810) Reply frame received for 5 I0524 11:36:29.842272 6 log.go:172] (0xc0008bf810) Data frame received for 3 I0524 11:36:29.842339 6 log.go:172] (0xc001db5cc0) (3) Data frame handling I0524 11:36:29.842367 6 log.go:172] (0xc001db5cc0) (3) Data frame sent I0524 11:36:29.842384 6 log.go:172] (0xc0008bf810) Data frame received for 3 I0524 11:36:29.842399 6 log.go:172] (0xc001db5cc0) (3) Data frame handling I0524 11:36:29.842437 6 log.go:172] (0xc0008bf810) Data frame received for 5 I0524 11:36:29.842483 6 log.go:172] (0xc00197a6e0) (5) Data frame handling I0524 11:36:29.844088 6 log.go:172] (0xc0008bf810) Data frame received for 1 I0524 11:36:29.844132 6 log.go:172] (0xc00175bcc0) (1) Data frame handling I0524 11:36:29.844174 6 log.go:172] (0xc00175bcc0) (1) Data frame sent I0524 11:36:29.844203 6 log.go:172] (0xc0008bf810) (0xc00175bcc0) Stream removed, broadcasting: 1 I0524 11:36:29.844226 6 log.go:172] (0xc0008bf810) Go away received I0524 11:36:29.844329 6 log.go:172] (0xc0008bf810) (0xc00175bcc0) Stream removed, broadcasting: 1 I0524 11:36:29.844353 6 log.go:172] (0xc0008bf810) (0xc001db5cc0) Stream removed, broadcasting: 3 I0524 11:36:29.844364 6 log.go:172] (0xc0008bf810) (0xc00197a6e0) Stream removed, broadcasting: 5 May 24 11:36:29.844: INFO: Exec stderr: "" May 24 11:36:29.844: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mkvsc PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:36:29.844: INFO: >>> kubeConfig: /root/.kube/config I0524 11:36:29.875723 6 log.go:172] 
(0xc0008bfce0) (0xc00044e3c0) Create stream I0524 11:36:29.875742 6 log.go:172] (0xc0008bfce0) (0xc00044e3c0) Stream added, broadcasting: 1 I0524 11:36:29.878744 6 log.go:172] (0xc0008bfce0) Reply frame received for 1 I0524 11:36:29.878783 6 log.go:172] (0xc0008bfce0) (0xc00044e640) Create stream I0524 11:36:29.878794 6 log.go:172] (0xc0008bfce0) (0xc00044e640) Stream added, broadcasting: 3 I0524 11:36:29.879862 6 log.go:172] (0xc0008bfce0) Reply frame received for 3 I0524 11:36:29.879897 6 log.go:172] (0xc0008bfce0) (0xc00044e8c0) Create stream I0524 11:36:29.879910 6 log.go:172] (0xc0008bfce0) (0xc00044e8c0) Stream added, broadcasting: 5 I0524 11:36:29.881000 6 log.go:172] (0xc0008bfce0) Reply frame received for 5 I0524 11:36:29.950201 6 log.go:172] (0xc0008bfce0) Data frame received for 5 I0524 11:36:29.950247 6 log.go:172] (0xc00044e8c0) (5) Data frame handling I0524 11:36:29.951063 6 log.go:172] (0xc0008bfce0) Data frame received for 3 I0524 11:36:29.951088 6 log.go:172] (0xc00044e640) (3) Data frame handling I0524 11:36:29.951105 6 log.go:172] (0xc00044e640) (3) Data frame sent I0524 11:36:29.951122 6 log.go:172] (0xc0008bfce0) Data frame received for 3 I0524 11:36:29.951136 6 log.go:172] (0xc00044e640) (3) Data frame handling I0524 11:36:29.955392 6 log.go:172] (0xc0008bfce0) Data frame received for 1 I0524 11:36:29.955417 6 log.go:172] (0xc00044e3c0) (1) Data frame handling I0524 11:36:29.955430 6 log.go:172] (0xc00044e3c0) (1) Data frame sent I0524 11:36:29.955448 6 log.go:172] (0xc0008bfce0) (0xc00044e3c0) Stream removed, broadcasting: 1 I0524 11:36:29.955540 6 log.go:172] (0xc0008bfce0) (0xc00044e3c0) Stream removed, broadcasting: 1 I0524 11:36:29.955559 6 log.go:172] (0xc0008bfce0) (0xc00044e640) Stream removed, broadcasting: 3 I0524 11:36:29.955578 6 log.go:172] (0xc0008bfce0) (0xc00044e8c0) Stream removed, broadcasting: 5 May 24 11:36:29.955: INFO: Exec stderr: "" I0524 11:36:29.955618 6 log.go:172] (0xc0008bfce0) Go away received May 24 11:36:29.955: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mkvsc PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:36:29.955: INFO: >>> kubeConfig: /root/.kube/config I0524 11:36:29.979812 6 log.go:172] (0xc002014210) (0xc00044f040) Create stream I0524 11:36:29.979836 6 log.go:172] (0xc002014210) (0xc00044f040) Stream added, broadcasting: 1 I0524 11:36:29.981405 6 log.go:172] (0xc002014210) Reply frame received for 1 I0524 11:36:29.981431 6 log.go:172] (0xc002014210) (0xc001db5d60) Create stream I0524 11:36:29.981440 6 log.go:172] (0xc002014210) (0xc001db5d60) Stream added, broadcasting: 3 I0524 11:36:29.982189 6 log.go:172] (0xc002014210) Reply frame received for 3 I0524 11:36:29.982216 6 log.go:172] (0xc002014210) (0xc0017bfe00) Create stream I0524 11:36:29.982225 6 log.go:172] (0xc002014210) (0xc0017bfe00) Stream added, broadcasting: 5 I0524 11:36:29.982910 6 log.go:172] (0xc002014210) Reply frame received for 5 I0524 11:36:30.040745 6 log.go:172] (0xc002014210) Data frame received for 3 I0524 11:36:30.040786 6 log.go:172] (0xc001db5d60) (3) Data frame handling I0524 11:36:30.040822 6 log.go:172] (0xc001db5d60) (3) Data frame sent I0524 11:36:30.040880 6 log.go:172] (0xc002014210) Data frame received for 3 I0524 11:36:30.040928 6 log.go:172] (0xc001db5d60) (3) Data frame handling I0524 11:36:30.040991 6 log.go:172] (0xc002014210) Data frame received for 5 I0524 11:36:30.041019 6 log.go:172] 
(0xc0017bfe00) (5) Data frame handling I0524 11:36:30.042483 6 log.go:172] (0xc002014210) Data frame received for 1 I0524 11:36:30.042517 6 log.go:172] (0xc00044f040) (1) Data frame handling I0524 11:36:30.042560 6 log.go:172] (0xc00044f040) (1) Data frame sent I0524 11:36:30.042590 6 log.go:172] (0xc002014210) (0xc00044f040) Stream removed, broadcasting: 1 I0524 11:36:30.042613 6 log.go:172] (0xc002014210) Go away received I0524 11:36:30.042775 6 log.go:172] (0xc002014210) (0xc00044f040) Stream removed, broadcasting: 1 I0524 11:36:30.042812 6 log.go:172] (0xc002014210) (0xc001db5d60) Stream removed, broadcasting: 3 I0524 11:36:30.042826 6 log.go:172] (0xc002014210) (0xc0017bfe00) Stream removed, broadcasting: 5 May 24 11:36:30.042: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:36:30.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-mkvsc" for this suite. May 24 11:37:32.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:37:32.111: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-mkvsc, resource: bindings, ignored listing per whitelist May 24 11:37:32.170: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-mkvsc deletion completed in 1m2.102822763s • [SLOW TEST:73.400 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:37:32.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f4abd66b-9db2-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 11:37:32.386: INFO: Waiting up to 5m0s for pod "pod-secrets-f4b9ef52-9db2-11ea-9618-0242ac110016" in namespace "e2e-tests-secrets-8jtcv" to be "success or failure" May 24 11:37:32.402: INFO: Pod "pod-secrets-f4b9ef52-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 15.479066ms May 24 11:37:34.406: INFO: Pod "pod-secrets-f4b9ef52-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020091743s May 24 11:37:36.414: INFO: Pod "pod-secrets-f4b9ef52-9db2-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027877017s STEP: Saw pod success May 24 11:37:36.414: INFO: Pod "pod-secrets-f4b9ef52-9db2-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:37:36.417: INFO: Trying to get logs from node hunter-worker pod pod-secrets-f4b9ef52-9db2-11ea-9618-0242ac110016 container secret-volume-test: STEP: delete the pod May 24 11:37:36.444: INFO: Waiting for pod pod-secrets-f4b9ef52-9db2-11ea-9618-0242ac110016 to disappear May 24 11:37:36.450: INFO: Pod pod-secrets-f4b9ef52-9db2-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:37:36.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-8jtcv" for this suite. May 24 11:37:42.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:37:42.501: INFO: namespace: e2e-tests-secrets-8jtcv, resource: bindings, ignored listing per whitelist May 24 11:37:42.543: INFO: namespace e2e-tests-secrets-8jtcv deletion completed in 6.089405504s STEP: Destroying namespace "e2e-tests-secret-namespace-rfphc" for this suite. May 24 11:37:48.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:37:48.662: INFO: namespace: e2e-tests-secret-namespace-rfphc, resource: bindings, ignored listing per whitelist May 24 11:37:48.678: INFO: namespace e2e-tests-secret-namespace-rfphc deletion completed in 6.134490551s • [SLOW TEST:16.507 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:37:48.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 24 11:37:48.817: INFO: Waiting up to 5m0s for pod "pod-fe8342ef-9db2-11ea-9618-0242ac110016" in namespace "e2e-tests-emptydir-sdw9l" to be "success or failure" May 24 11:37:48.822: INFO: Pod "pod-fe8342ef-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 5.277179ms May 24 11:37:50.827: INFO: Pod "pod-fe8342ef-9db2-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009645076s May 24 11:37:52.832: INFO: Pod "pod-fe8342ef-9db2-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014551136s STEP: Saw pod success May 24 11:37:52.832: INFO: Pod "pod-fe8342ef-9db2-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:37:52.835: INFO: Trying to get logs from node hunter-worker2 pod pod-fe8342ef-9db2-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 11:37:52.854: INFO: Waiting for pod pod-fe8342ef-9db2-11ea-9618-0242ac110016 to disappear May 24 11:37:52.859: INFO: Pod pod-fe8342ef-9db2-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:37:52.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sdw9l" for this suite. May 24 11:37:58.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:37:58.907: INFO: namespace: e2e-tests-emptydir-sdw9l, resource: bindings, ignored listing per whitelist May 24 11:37:58.976: INFO: namespace e2e-tests-emptydir-sdw9l deletion completed in 6.113019366s • [SLOW TEST:10.298 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:37:58.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-04a6a9f6-9db3-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 11:37:59.117: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-04a8d2c1-9db3-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-dl9mb" to be "success or failure" May 24 11:37:59.121: INFO: Pod "pod-projected-configmaps-04a8d2c1-9db3-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.671833ms May 24 11:38:01.125: INFO: Pod "pod-projected-configmaps-04a8d2c1-9db3-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007867629s May 24 11:38:03.139: INFO: Pod "pod-projected-configmaps-04a8d2c1-9db3-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021906732s STEP: Saw pod success May 24 11:38:03.140: INFO: Pod "pod-projected-configmaps-04a8d2c1-9db3-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:38:03.144: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-04a8d2c1-9db3-11ea-9618-0242ac110016 container projected-configmap-volume-test: STEP: delete the pod May 24 11:38:03.164: INFO: Waiting for pod pod-projected-configmaps-04a8d2c1-9db3-11ea-9618-0242ac110016 to disappear May 24 11:38:03.193: INFO: Pod pod-projected-configmaps-04a8d2c1-9db3-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:38:03.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dl9mb" for this suite. May 24 11:38:09.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:38:09.225: INFO: namespace: e2e-tests-projected-dl9mb, resource: bindings, ignored listing per whitelist May 24 11:38:09.264: INFO: namespace e2e-tests-projected-dl9mb deletion completed in 6.067812275s • [SLOW TEST:10.288 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:38:09.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-0ac64e0a-9db3-11ea-9618-0242ac110016 STEP: Creating secret with name s-test-opt-upd-0ac64e80-9db3-11ea-9618-0242ac110016 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0ac64e0a-9db3-11ea-9618-0242ac110016 STEP: Updating secret s-test-opt-upd-0ac64e80-9db3-11ea-9618-0242ac110016 STEP: Creating secret with name s-test-opt-create-0ac64eae-9db3-11ea-9618-0242ac110016 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:39:47.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-gfsht" for this suite. 
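A hand-run sketch of the optional-secret volume behaviour exercised above; the secret and pod names, keys, and values are illustrative, and propagation into the mounted volume depends on the kubelet sync period:

  kubectl create secret generic s-test-opt --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-optional-demo
  spec:
    containers:
    - name: watcher
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/secret-volume/data-1 2>/dev/null; echo; sleep 5; done"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: s-test-opt
        optional: true        # the pod still starts if the secret is deleted or absent
  EOF
  # Replacing the secret is eventually reflected inside the running pod:
  kubectl delete secret s-test-opt
  kubectl create secret generic s-test-opt --from-literal=data-1=value-2
  kubectl logs -f secret-optional-demo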
May 24 11:40:09.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:40:10.013: INFO: namespace: e2e-tests-secrets-gfsht, resource: bindings, ignored listing per whitelist May 24 11:40:10.070: INFO: namespace e2e-tests-secrets-gfsht deletion completed in 22.092477383s • [SLOW TEST:120.805 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:40:10.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 24 11:40:10.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-g5x2l' May 24 11:40:10.253: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 24 11:40:10.253: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 May 24 11:40:14.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-g5x2l' May 24 11:40:14.375: INFO: stderr: "" May 24 11:40:14.375: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:40:14.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-g5x2l" for this suite. 
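As the stderr captured above notes, kubectl run --generator=deployment/v1beta1 is deprecated; on a current kubectl the same step is usually expressed with kubectl create deployment (a sketch, not part of this run):

  kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
  kubectl get deployment e2e-test-nginx-deployment
  kubectl delete deployment e2e-test-nginx-deployment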
May 24 11:40:34.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:40:34.462: INFO: namespace: e2e-tests-kubectl-g5x2l, resource: bindings, ignored listing per whitelist May 24 11:40:34.515: INFO: namespace e2e-tests-kubectl-g5x2l deletion completed in 20.13115689s • [SLOW TEST:24.445 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:40:34.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-615742c5-9db3-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 11:40:34.624: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6157cd3c-9db3-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-z6tqp" to be "success or failure" May 24 11:40:34.639: INFO: Pod "pod-projected-secrets-6157cd3c-9db3-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 14.41682ms May 24 11:40:36.643: INFO: Pod "pod-projected-secrets-6157cd3c-9db3-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018541994s May 24 11:40:38.648: INFO: Pod "pod-projected-secrets-6157cd3c-9db3-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023429656s STEP: Saw pod success May 24 11:40:38.648: INFO: Pod "pod-projected-secrets-6157cd3c-9db3-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:40:38.651: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-6157cd3c-9db3-11ea-9618-0242ac110016 container projected-secret-volume-test: STEP: delete the pod May 24 11:40:38.670: INFO: Waiting for pod pod-projected-secrets-6157cd3c-9db3-11ea-9618-0242ac110016 to disappear May 24 11:40:38.674: INFO: Pod pod-projected-secrets-6157cd3c-9db3-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:40:38.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z6tqp" for this suite. 
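The projected secret volume consumed above corresponds to a pod spec along these lines; the secret, pod, and key names are illustrative:

  kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
      volumeMounts:
      - name: projected-secret-volume
        mountPath: /etc/projected-secret-volume
        readOnly: true
    volumes:
    - name: projected-secret-volume
      projected:
        sources:
        - secret:
            name: projected-secret-demo
  EOF
  kubectl logs pod-projected-secrets-demo   # should print value-1 once the pod succeeds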
May 24 11:40:44.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:40:44.729: INFO: namespace: e2e-tests-projected-z6tqp, resource: bindings, ignored listing per whitelist May 24 11:40:44.759: INFO: namespace e2e-tests-projected-z6tqp deletion completed in 6.080942048s • [SLOW TEST:10.244 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:40:44.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 24 11:40:51.887: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:40:52.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-s6qb6" for this suite. 
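The STEP lines above compress the interesting part of this spec: a bare pod is adopted by a ReplicaSet whose selector matches its label, and changing that label releases the pod again. Roughly the same mechanics by hand, with illustrative names:

kubectl run pod-adoption-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never --labels=name=pod-adoption-demo
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-adoption-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-demo
  template:
    metadata:
      labels:
        name: pod-adoption-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# the bare pod should now list the ReplicaSet as its controller ...
kubectl get pod pod-adoption-demo -o jsonpath='{.metadata.ownerReferences[0].name}'
# ... and relabelling it releases it again; the ReplicaSet spins up a replacement pod
kubectl label pod pod-adoption-demo name=released --overwrite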
May 24 11:41:14.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:41:14.958: INFO: namespace: e2e-tests-replicaset-s6qb6, resource: bindings, ignored listing per whitelist May 24 11:41:14.998: INFO: namespace e2e-tests-replicaset-s6qb6 deletion completed in 22.090435924s • [SLOW TEST:30.239 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:41:14.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 11:41:15.128: INFO: Creating ReplicaSet my-hostname-basic-797ee569-9db3-11ea-9618-0242ac110016 May 24 11:41:15.174: INFO: Pod name my-hostname-basic-797ee569-9db3-11ea-9618-0242ac110016: Found 0 pods out of 1 May 24 11:41:20.179: INFO: Pod name my-hostname-basic-797ee569-9db3-11ea-9618-0242ac110016: Found 1 pods out of 1 May 24 11:41:20.179: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-797ee569-9db3-11ea-9618-0242ac110016" is running May 24 11:41:20.183: INFO: Pod "my-hostname-basic-797ee569-9db3-11ea-9618-0242ac110016-mxhsd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 11:41:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 11:41:17 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 11:41:17 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 11:41:15 +0000 UTC Reason: Message:}]) May 24 11:41:20.183: INFO: Trying to dial the pod May 24 11:41:25.196: INFO: Controller my-hostname-basic-797ee569-9db3-11ea-9618-0242ac110016: Got expected result from replica 1 [my-hostname-basic-797ee569-9db3-11ea-9618-0242ac110016-mxhsd]: "my-hostname-basic-797ee569-9db3-11ea-9618-0242ac110016-mxhsd", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:41:25.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-5q2jk" for this suite. 
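This spec builds a ReplicaSet from a public image that answers HTTP requests with the pod's own hostname, then dials the replica and expects the pod name back. An analogous sketch; the names are illustrative, and the agnhost image with its serve-hostname mode on port 9376 is an assumption rather than the suite's exact image:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-demo
  template:
    metadata:
      labels:
        name: my-hostname-basic-demo
    spec:
      containers:
      - name: serve-hostname
        image: registry.k8s.io/e2e-test-images/agnhost:2.39   # assumed image; serves the pod name over HTTP
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
EOF
kubectl wait --for=condition=Ready pod -l name=my-hostname-basic-demo --timeout=2m
kubectl get pod -l name=my-hostname-basic-demo -o wide   # note the pod IP; an in-cluster request to <ip>:9376 should echo the pod's name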
May 24 11:41:31.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:41:31.316: INFO: namespace: e2e-tests-replicaset-5q2jk, resource: bindings, ignored listing per whitelist May 24 11:41:31.367: INFO: namespace e2e-tests-replicaset-5q2jk deletion completed in 6.117401629s • [SLOW TEST:16.370 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:41:31.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-833d50ff-9db3-11ea-9618-0242ac110016 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-833d50ff-9db3-11ea-9618-0242ac110016 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:41:39.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-smtg4" for this suite. 
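The ConfigMap spec above checks that an edit to a mounted ConfigMap eventually becomes visible inside the consuming pod. A hand-run version of the same loop; the names and key are illustrative, and the kubelet's periodic volume sync means the change appears after a delay rather than instantly:

kubectl create configmap cm-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: cm-demo
EOF
kubectl patch configmap cm-demo -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f cm-watch-demo   # the printed value flips to value-2 once the kubelet resyncs the volume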
May 24 11:42:01.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:42:01.650: INFO: namespace: e2e-tests-configmap-smtg4, resource: bindings, ignored listing per whitelist May 24 11:42:01.671: INFO: namespace e2e-tests-configmap-smtg4 deletion completed in 22.131875354s • [SLOW TEST:30.304 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:42:01.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 11:42:01.771: INFO: Waiting up to 5m0s for pod "downwardapi-volume-954afd22-9db3-11ea-9618-0242ac110016" in namespace "e2e-tests-downward-api-x8gbn" to be "success or failure" May 24 11:42:01.787: INFO: Pod "downwardapi-volume-954afd22-9db3-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 16.124882ms May 24 11:42:03.790: INFO: Pod "downwardapi-volume-954afd22-9db3-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019483799s May 24 11:42:05.795: INFO: Pod "downwardapi-volume-954afd22-9db3-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024212725s STEP: Saw pod success May 24 11:42:05.795: INFO: Pod "downwardapi-volume-954afd22-9db3-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:42:05.798: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-954afd22-9db3-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 11:42:05.832: INFO: Waiting for pod downwardapi-volume-954afd22-9db3-11ea-9618-0242ac110016 to disappear May 24 11:42:05.847: INFO: Pod downwardapi-volume-954afd22-9db3-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:42:05.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-x8gbn" for this suite. 
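This downward-API spec relies on the fallback that, when a container declares no memory limit, limits.memory exposed through a downwardAPI volume reports the node's allocatable memory instead. A sketch of that wiring with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-mem-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # deliberately no resources.limits.memory here
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi
EOF
# after the pod completes, the logged value is the node's allocatable memory in Mi
kubectl logs downward-mem-demo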
May 24 11:42:11.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:42:11.939: INFO: namespace: e2e-tests-downward-api-x8gbn, resource: bindings, ignored listing per whitelist May 24 11:42:11.968: INFO: namespace e2e-tests-downward-api-x8gbn deletion completed in 6.118139829s • [SLOW TEST:10.297 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:42:11.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 24 11:42:12.096: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 11:42:12.113: INFO: Waiting for terminating namespaces to be deleted... May 24 11:42:12.116: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 24 11:42:12.121: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 24 11:42:12.121: INFO: Container kube-proxy ready: true, restart count 0 May 24 11:42:12.121: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 24 11:42:12.121: INFO: Container kindnet-cni ready: true, restart count 0 May 24 11:42:12.121: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 24 11:42:12.121: INFO: Container coredns ready: true, restart count 0 May 24 11:42:12.121: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 24 11:42:12.127: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 24 11:42:12.127: INFO: Container kindnet-cni ready: true, restart count 0 May 24 11:42:12.127: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 24 11:42:12.127: INFO: Container coredns ready: true, restart count 0 May 24 11:42:12.127: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 24 11:42:12.127: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
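The scheduling steps that follow in the log (apply a random label to the node that was found, relaunch the pod with a matching nodeSelector, then strip the label again) can be reproduced roughly as below. The label key, pod name, and pause image are illustrative; hunter-worker is the node the suite itself picked:

kubectl label node hunter-worker example.com/e2e-demo=42
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9   # any always-running image works here
EOF
kubectl get pod nodeselector-demo -o wide                 # should land on hunter-worker
kubectl delete pod nodeselector-demo
kubectl label node hunter-worker example.com/e2e-demo-    # the trailing dash removes the label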
STEP: verifying the node has the label kubernetes.io/e2e-9de1d363-9db3-11ea-9618-0242ac110016 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-9de1d363-9db3-11ea-9618-0242ac110016 off the node hunter-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-9de1d363-9db3-11ea-9618-0242ac110016 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:42:20.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-nkmlf" for this suite. May 24 11:42:28.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:42:28.368: INFO: namespace: e2e-tests-sched-pred-nkmlf, resource: bindings, ignored listing per whitelist May 24 11:42:28.399: INFO: namespace e2e-tests-sched-pred-nkmlf deletion completed in 8.132489584s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:16.431 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:42:28.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-a53bb39f-9db3-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 11:42:28.526: INFO: Waiting up to 5m0s for pod "pod-secrets-a53d5410-9db3-11ea-9618-0242ac110016" in namespace "e2e-tests-secrets-snt22" to be "success or failure" May 24 11:42:28.530: INFO: Pod "pod-secrets-a53d5410-9db3-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.695624ms May 24 11:42:30.533: INFO: Pod "pod-secrets-a53d5410-9db3-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007173242s May 24 11:42:32.538: INFO: Pod "pod-secrets-a53d5410-9db3-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011397972s STEP: Saw pod success May 24 11:42:32.538: INFO: Pod "pod-secrets-a53d5410-9db3-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:42:32.540: INFO: Trying to get logs from node hunter-worker pod pod-secrets-a53d5410-9db3-11ea-9618-0242ac110016 container secret-volume-test: STEP: delete the pod May 24 11:42:32.599: INFO: Waiting for pod pod-secrets-a53d5410-9db3-11ea-9618-0242ac110016 to disappear May 24 11:42:32.614: INFO: Pod pod-secrets-a53d5410-9db3-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:42:32.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-snt22" for this suite. May 24 11:42:38.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:42:38.668: INFO: namespace: e2e-tests-secrets-snt22, resource: bindings, ignored listing per whitelist May 24 11:42:38.713: INFO: namespace e2e-tests-secrets-snt22 deletion completed in 6.095552715s • [SLOW TEST:10.314 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:42:38.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-qqgrq [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-qqgrq STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-qqgrq May 24 11:42:38.844: INFO: Found 0 stateful pods, waiting for 1 May 24 11:42:48.848: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 24 11:42:48.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 24 11:42:49.190: INFO: stderr: "I0524 11:42:48.997666 2979 log.go:172] (0xc00014c790) (0xc000135400) Create stream\nI0524 11:42:48.997754 2979 log.go:172] (0xc00014c790) (0xc000135400) Stream added, broadcasting: 
1\nI0524 11:42:49.000333 2979 log.go:172] (0xc00014c790) Reply frame received for 1\nI0524 11:42:49.000398 2979 log.go:172] (0xc00014c790) (0xc000386000) Create stream\nI0524 11:42:49.000424 2979 log.go:172] (0xc00014c790) (0xc000386000) Stream added, broadcasting: 3\nI0524 11:42:49.001640 2979 log.go:172] (0xc00014c790) Reply frame received for 3\nI0524 11:42:49.001680 2979 log.go:172] (0xc00014c790) (0xc0003860a0) Create stream\nI0524 11:42:49.001692 2979 log.go:172] (0xc00014c790) (0xc0003860a0) Stream added, broadcasting: 5\nI0524 11:42:49.002554 2979 log.go:172] (0xc00014c790) Reply frame received for 5\nI0524 11:42:49.181936 2979 log.go:172] (0xc00014c790) Data frame received for 3\nI0524 11:42:49.181993 2979 log.go:172] (0xc000386000) (3) Data frame handling\nI0524 11:42:49.182161 2979 log.go:172] (0xc000386000) (3) Data frame sent\nI0524 11:42:49.182202 2979 log.go:172] (0xc00014c790) Data frame received for 3\nI0524 11:42:49.182236 2979 log.go:172] (0xc000386000) (3) Data frame handling\nI0524 11:42:49.182281 2979 log.go:172] (0xc00014c790) Data frame received for 5\nI0524 11:42:49.182375 2979 log.go:172] (0xc0003860a0) (5) Data frame handling\nI0524 11:42:49.184100 2979 log.go:172] (0xc00014c790) Data frame received for 1\nI0524 11:42:49.184116 2979 log.go:172] (0xc000135400) (1) Data frame handling\nI0524 11:42:49.184127 2979 log.go:172] (0xc000135400) (1) Data frame sent\nI0524 11:42:49.184359 2979 log.go:172] (0xc00014c790) (0xc000135400) Stream removed, broadcasting: 1\nI0524 11:42:49.184411 2979 log.go:172] (0xc00014c790) Go away received\nI0524 11:42:49.184734 2979 log.go:172] (0xc00014c790) (0xc000135400) Stream removed, broadcasting: 1\nI0524 11:42:49.184765 2979 log.go:172] (0xc00014c790) (0xc000386000) Stream removed, broadcasting: 3\nI0524 11:42:49.184785 2979 log.go:172] (0xc00014c790) (0xc0003860a0) Stream removed, broadcasting: 5\n" May 24 11:42:49.190: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 24 11:42:49.190: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 24 11:42:49.193: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 24 11:42:59.198: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 11:42:59.198: INFO: Waiting for statefulset status.replicas updated to 0 May 24 11:42:59.214: INFO: POD NODE PHASE GRACE CONDITIONS May 24 11:42:59.214: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:38 +0000 UTC }] May 24 11:42:59.214: INFO: May 24 11:42:59.214: INFO: StatefulSet ss has not reached scale 3, at 1 May 24 11:43:00.219: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995617408s May 24 11:43:01.227: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991055489s May 24 11:43:02.317: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982692409s May 24 11:43:03.322: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.892448759s May 24 11:43:04.327: INFO: Verifying statefulset ss doesn't scale 
past 3 for another 4.887172209s May 24 11:43:05.332: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.88233717s May 24 11:43:06.338: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.87699964s May 24 11:43:07.343: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.871924623s May 24 11:43:08.362: INFO: Verifying statefulset ss doesn't scale past 3 for another 866.712053ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-qqgrq May 24 11:43:09.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:43:09.598: INFO: stderr: "I0524 11:43:09.499626 3001 log.go:172] (0xc000138630) (0xc000698640) Create stream\nI0524 11:43:09.499691 3001 log.go:172] (0xc000138630) (0xc000698640) Stream added, broadcasting: 1\nI0524 11:43:09.502304 3001 log.go:172] (0xc000138630) Reply frame received for 1\nI0524 11:43:09.502357 3001 log.go:172] (0xc000138630) (0xc0006986e0) Create stream\nI0524 11:43:09.502370 3001 log.go:172] (0xc000138630) (0xc0006986e0) Stream added, broadcasting: 3\nI0524 11:43:09.503288 3001 log.go:172] (0xc000138630) Reply frame received for 3\nI0524 11:43:09.503325 3001 log.go:172] (0xc000138630) (0xc00036ed20) Create stream\nI0524 11:43:09.503343 3001 log.go:172] (0xc000138630) (0xc00036ed20) Stream added, broadcasting: 5\nI0524 11:43:09.504309 3001 log.go:172] (0xc000138630) Reply frame received for 5\nI0524 11:43:09.590369 3001 log.go:172] (0xc000138630) Data frame received for 5\nI0524 11:43:09.590433 3001 log.go:172] (0xc00036ed20) (5) Data frame handling\nI0524 11:43:09.590475 3001 log.go:172] (0xc000138630) Data frame received for 3\nI0524 11:43:09.590500 3001 log.go:172] (0xc0006986e0) (3) Data frame handling\nI0524 11:43:09.590529 3001 log.go:172] (0xc0006986e0) (3) Data frame sent\nI0524 11:43:09.590553 3001 log.go:172] (0xc000138630) Data frame received for 3\nI0524 11:43:09.590567 3001 log.go:172] (0xc0006986e0) (3) Data frame handling\nI0524 11:43:09.591974 3001 log.go:172] (0xc000138630) Data frame received for 1\nI0524 11:43:09.592019 3001 log.go:172] (0xc000698640) (1) Data frame handling\nI0524 11:43:09.592050 3001 log.go:172] (0xc000698640) (1) Data frame sent\nI0524 11:43:09.592072 3001 log.go:172] (0xc000138630) (0xc000698640) Stream removed, broadcasting: 1\nI0524 11:43:09.592105 3001 log.go:172] (0xc000138630) Go away received\nI0524 11:43:09.592330 3001 log.go:172] (0xc000138630) (0xc000698640) Stream removed, broadcasting: 1\nI0524 11:43:09.592359 3001 log.go:172] (0xc000138630) (0xc0006986e0) Stream removed, broadcasting: 3\nI0524 11:43:09.592372 3001 log.go:172] (0xc000138630) (0xc00036ed20) Stream removed, broadcasting: 5\n" May 24 11:43:09.598: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 24 11:43:09.598: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 24 11:43:09.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:43:09.816: INFO: stderr: "I0524 11:43:09.733606 3024 log.go:172] (0xc000138790) (0xc0005db360) Create stream\nI0524 11:43:09.733673 3024 log.go:172] (0xc000138790) (0xc0005db360) Stream added, broadcasting: 
1\nI0524 11:43:09.736415 3024 log.go:172] (0xc000138790) Reply frame received for 1\nI0524 11:43:09.736448 3024 log.go:172] (0xc000138790) (0xc0004dc000) Create stream\nI0524 11:43:09.736457 3024 log.go:172] (0xc000138790) (0xc0004dc000) Stream added, broadcasting: 3\nI0524 11:43:09.737578 3024 log.go:172] (0xc000138790) Reply frame received for 3\nI0524 11:43:09.737616 3024 log.go:172] (0xc000138790) (0xc00011c000) Create stream\nI0524 11:43:09.737638 3024 log.go:172] (0xc000138790) (0xc00011c000) Stream added, broadcasting: 5\nI0524 11:43:09.738490 3024 log.go:172] (0xc000138790) Reply frame received for 5\nI0524 11:43:09.810386 3024 log.go:172] (0xc000138790) Data frame received for 5\nI0524 11:43:09.810408 3024 log.go:172] (0xc00011c000) (5) Data frame handling\nI0524 11:43:09.810415 3024 log.go:172] (0xc00011c000) (5) Data frame sent\nI0524 11:43:09.810420 3024 log.go:172] (0xc000138790) Data frame received for 5\nI0524 11:43:09.810424 3024 log.go:172] (0xc00011c000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0524 11:43:09.810441 3024 log.go:172] (0xc000138790) Data frame received for 3\nI0524 11:43:09.810447 3024 log.go:172] (0xc0004dc000) (3) Data frame handling\nI0524 11:43:09.810452 3024 log.go:172] (0xc0004dc000) (3) Data frame sent\nI0524 11:43:09.810456 3024 log.go:172] (0xc000138790) Data frame received for 3\nI0524 11:43:09.810459 3024 log.go:172] (0xc0004dc000) (3) Data frame handling\nI0524 11:43:09.812175 3024 log.go:172] (0xc000138790) Data frame received for 1\nI0524 11:43:09.812197 3024 log.go:172] (0xc0005db360) (1) Data frame handling\nI0524 11:43:09.812204 3024 log.go:172] (0xc0005db360) (1) Data frame sent\nI0524 11:43:09.812212 3024 log.go:172] (0xc000138790) (0xc0005db360) Stream removed, broadcasting: 1\nI0524 11:43:09.812225 3024 log.go:172] (0xc000138790) Go away received\nI0524 11:43:09.812446 3024 log.go:172] (0xc000138790) (0xc0005db360) Stream removed, broadcasting: 1\nI0524 11:43:09.812460 3024 log.go:172] (0xc000138790) (0xc0004dc000) Stream removed, broadcasting: 3\nI0524 11:43:09.812466 3024 log.go:172] (0xc000138790) (0xc00011c000) Stream removed, broadcasting: 5\n" May 24 11:43:09.816: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 24 11:43:09.816: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 24 11:43:09.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:43:10.028: INFO: stderr: "I0524 11:43:09.949462 3047 log.go:172] (0xc0007c42c0) (0xc00072e640) Create stream\nI0524 11:43:09.949529 3047 log.go:172] (0xc0007c42c0) (0xc00072e640) Stream added, broadcasting: 1\nI0524 11:43:09.951742 3047 log.go:172] (0xc0007c42c0) Reply frame received for 1\nI0524 11:43:09.951791 3047 log.go:172] (0xc0007c42c0) (0xc00032ac80) Create stream\nI0524 11:43:09.951815 3047 log.go:172] (0xc0007c42c0) (0xc00032ac80) Stream added, broadcasting: 3\nI0524 11:43:09.952726 3047 log.go:172] (0xc0007c42c0) Reply frame received for 3\nI0524 11:43:09.952761 3047 log.go:172] (0xc0007c42c0) (0xc0003e2000) Create stream\nI0524 11:43:09.952773 3047 log.go:172] (0xc0007c42c0) (0xc0003e2000) Stream added, broadcasting: 5\nI0524 11:43:09.953671 3047 log.go:172] (0xc0007c42c0) Reply frame received for 5\nI0524 11:43:10.022696 3047 log.go:172] (0xc0007c42c0) Data frame received 
for 3\nI0524 11:43:10.022747 3047 log.go:172] (0xc00032ac80) (3) Data frame handling\nI0524 11:43:10.022774 3047 log.go:172] (0xc00032ac80) (3) Data frame sent\nI0524 11:43:10.022793 3047 log.go:172] (0xc0007c42c0) Data frame received for 3\nI0524 11:43:10.022811 3047 log.go:172] (0xc00032ac80) (3) Data frame handling\nI0524 11:43:10.022986 3047 log.go:172] (0xc0007c42c0) Data frame received for 5\nI0524 11:43:10.023026 3047 log.go:172] (0xc0003e2000) (5) Data frame handling\nI0524 11:43:10.023066 3047 log.go:172] (0xc0003e2000) (5) Data frame sent\nI0524 11:43:10.023104 3047 log.go:172] (0xc0007c42c0) Data frame received for 5\nI0524 11:43:10.023116 3047 log.go:172] (0xc0003e2000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0524 11:43:10.024683 3047 log.go:172] (0xc0007c42c0) Data frame received for 1\nI0524 11:43:10.024712 3047 log.go:172] (0xc00072e640) (1) Data frame handling\nI0524 11:43:10.024726 3047 log.go:172] (0xc00072e640) (1) Data frame sent\nI0524 11:43:10.024735 3047 log.go:172] (0xc0007c42c0) (0xc00072e640) Stream removed, broadcasting: 1\nI0524 11:43:10.024752 3047 log.go:172] (0xc0007c42c0) Go away received\nI0524 11:43:10.025009 3047 log.go:172] (0xc0007c42c0) (0xc00072e640) Stream removed, broadcasting: 1\nI0524 11:43:10.025031 3047 log.go:172] (0xc0007c42c0) (0xc00032ac80) Stream removed, broadcasting: 3\nI0524 11:43:10.025041 3047 log.go:172] (0xc0007c42c0) (0xc0003e2000) Stream removed, broadcasting: 5\n" May 24 11:43:10.028: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 24 11:43:10.028: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 24 11:43:10.038: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 24 11:43:20.043: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 24 11:43:20.043: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 24 11:43:20.043: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 24 11:43:20.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 24 11:43:20.281: INFO: stderr: "I0524 11:43:20.186894 3070 log.go:172] (0xc00013a160) (0xc000700640) Create stream\nI0524 11:43:20.186976 3070 log.go:172] (0xc00013a160) (0xc000700640) Stream added, broadcasting: 1\nI0524 11:43:20.189251 3070 log.go:172] (0xc00013a160) Reply frame received for 1\nI0524 11:43:20.189296 3070 log.go:172] (0xc00013a160) (0xc000126c80) Create stream\nI0524 11:43:20.189305 3070 log.go:172] (0xc00013a160) (0xc000126c80) Stream added, broadcasting: 3\nI0524 11:43:20.190331 3070 log.go:172] (0xc00013a160) Reply frame received for 3\nI0524 11:43:20.190404 3070 log.go:172] (0xc00013a160) (0xc000518000) Create stream\nI0524 11:43:20.190437 3070 log.go:172] (0xc00013a160) (0xc000518000) Stream added, broadcasting: 5\nI0524 11:43:20.191390 3070 log.go:172] (0xc00013a160) Reply frame received for 5\nI0524 11:43:20.275428 3070 log.go:172] (0xc00013a160) Data frame received for 5\nI0524 11:43:20.275463 3070 log.go:172] (0xc000518000) (5) Data frame handling\nI0524 11:43:20.275492 3070 log.go:172] (0xc00013a160) Data frame received for 3\nI0524 11:43:20.275515 
3070 log.go:172] (0xc000126c80) (3) Data frame handling\nI0524 11:43:20.275528 3070 log.go:172] (0xc000126c80) (3) Data frame sent\nI0524 11:43:20.275534 3070 log.go:172] (0xc00013a160) Data frame received for 3\nI0524 11:43:20.275539 3070 log.go:172] (0xc000126c80) (3) Data frame handling\nI0524 11:43:20.276892 3070 log.go:172] (0xc00013a160) Data frame received for 1\nI0524 11:43:20.276921 3070 log.go:172] (0xc000700640) (1) Data frame handling\nI0524 11:43:20.276948 3070 log.go:172] (0xc000700640) (1) Data frame sent\nI0524 11:43:20.276982 3070 log.go:172] (0xc00013a160) (0xc000700640) Stream removed, broadcasting: 1\nI0524 11:43:20.276999 3070 log.go:172] (0xc00013a160) Go away received\nI0524 11:43:20.277330 3070 log.go:172] (0xc00013a160) (0xc000700640) Stream removed, broadcasting: 1\nI0524 11:43:20.277353 3070 log.go:172] (0xc00013a160) (0xc000126c80) Stream removed, broadcasting: 3\nI0524 11:43:20.277364 3070 log.go:172] (0xc00013a160) (0xc000518000) Stream removed, broadcasting: 5\n" May 24 11:43:20.281: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 24 11:43:20.281: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 24 11:43:20.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 24 11:43:20.602: INFO: stderr: "I0524 11:43:20.474087 3093 log.go:172] (0xc000154840) (0xc0006754a0) Create stream\nI0524 11:43:20.474154 3093 log.go:172] (0xc000154840) (0xc0006754a0) Stream added, broadcasting: 1\nI0524 11:43:20.476448 3093 log.go:172] (0xc000154840) Reply frame received for 1\nI0524 11:43:20.476492 3093 log.go:172] (0xc000154840) (0xc0005d4000) Create stream\nI0524 11:43:20.476501 3093 log.go:172] (0xc000154840) (0xc0005d4000) Stream added, broadcasting: 3\nI0524 11:43:20.477716 3093 log.go:172] (0xc000154840) Reply frame received for 3\nI0524 11:43:20.477785 3093 log.go:172] (0xc000154840) (0xc0005e0000) Create stream\nI0524 11:43:20.477806 3093 log.go:172] (0xc000154840) (0xc0005e0000) Stream added, broadcasting: 5\nI0524 11:43:20.478594 3093 log.go:172] (0xc000154840) Reply frame received for 5\nI0524 11:43:20.595252 3093 log.go:172] (0xc000154840) Data frame received for 5\nI0524 11:43:20.595292 3093 log.go:172] (0xc000154840) Data frame received for 3\nI0524 11:43:20.595330 3093 log.go:172] (0xc0005d4000) (3) Data frame handling\nI0524 11:43:20.595347 3093 log.go:172] (0xc0005d4000) (3) Data frame sent\nI0524 11:43:20.595366 3093 log.go:172] (0xc000154840) Data frame received for 3\nI0524 11:43:20.595375 3093 log.go:172] (0xc0005d4000) (3) Data frame handling\nI0524 11:43:20.595440 3093 log.go:172] (0xc0005e0000) (5) Data frame handling\nI0524 11:43:20.598167 3093 log.go:172] (0xc000154840) Data frame received for 1\nI0524 11:43:20.598194 3093 log.go:172] (0xc0006754a0) (1) Data frame handling\nI0524 11:43:20.598208 3093 log.go:172] (0xc0006754a0) (1) Data frame sent\nI0524 11:43:20.598229 3093 log.go:172] (0xc000154840) (0xc0006754a0) Stream removed, broadcasting: 1\nI0524 11:43:20.598258 3093 log.go:172] (0xc000154840) Go away received\nI0524 11:43:20.598554 3093 log.go:172] (0xc000154840) (0xc0006754a0) Stream removed, broadcasting: 1\nI0524 11:43:20.598579 3093 log.go:172] (0xc000154840) (0xc0005d4000) Stream removed, broadcasting: 3\nI0524 11:43:20.598590 3093 log.go:172] (0xc000154840) (0xc0005e0000) Stream removed, 
broadcasting: 5\n" May 24 11:43:20.602: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 24 11:43:20.602: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 24 11:43:20.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 24 11:43:20.844: INFO: stderr: "I0524 11:43:20.730753 3114 log.go:172] (0xc000138790) (0xc00075e640) Create stream\nI0524 11:43:20.730814 3114 log.go:172] (0xc000138790) (0xc00075e640) Stream added, broadcasting: 1\nI0524 11:43:20.732928 3114 log.go:172] (0xc000138790) Reply frame received for 1\nI0524 11:43:20.732968 3114 log.go:172] (0xc000138790) (0xc0005acc80) Create stream\nI0524 11:43:20.732982 3114 log.go:172] (0xc000138790) (0xc0005acc80) Stream added, broadcasting: 3\nI0524 11:43:20.734125 3114 log.go:172] (0xc000138790) Reply frame received for 3\nI0524 11:43:20.734169 3114 log.go:172] (0xc000138790) (0xc000626000) Create stream\nI0524 11:43:20.734184 3114 log.go:172] (0xc000138790) (0xc000626000) Stream added, broadcasting: 5\nI0524 11:43:20.735035 3114 log.go:172] (0xc000138790) Reply frame received for 5\nI0524 11:43:20.835993 3114 log.go:172] (0xc000138790) Data frame received for 5\nI0524 11:43:20.836046 3114 log.go:172] (0xc000626000) (5) Data frame handling\nI0524 11:43:20.836084 3114 log.go:172] (0xc000138790) Data frame received for 3\nI0524 11:43:20.836100 3114 log.go:172] (0xc0005acc80) (3) Data frame handling\nI0524 11:43:20.836128 3114 log.go:172] (0xc0005acc80) (3) Data frame sent\nI0524 11:43:20.836150 3114 log.go:172] (0xc000138790) Data frame received for 3\nI0524 11:43:20.836165 3114 log.go:172] (0xc0005acc80) (3) Data frame handling\nI0524 11:43:20.839292 3114 log.go:172] (0xc000138790) Data frame received for 1\nI0524 11:43:20.839318 3114 log.go:172] (0xc00075e640) (1) Data frame handling\nI0524 11:43:20.839339 3114 log.go:172] (0xc00075e640) (1) Data frame sent\nI0524 11:43:20.839421 3114 log.go:172] (0xc000138790) (0xc00075e640) Stream removed, broadcasting: 1\nI0524 11:43:20.839670 3114 log.go:172] (0xc000138790) (0xc00075e640) Stream removed, broadcasting: 1\nI0524 11:43:20.839688 3114 log.go:172] (0xc000138790) (0xc0005acc80) Stream removed, broadcasting: 3\nI0524 11:43:20.839901 3114 log.go:172] (0xc000138790) (0xc000626000) Stream removed, broadcasting: 5\n" May 24 11:43:20.844: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 24 11:43:20.844: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 24 11:43:20.844: INFO: Waiting for statefulset status.replicas updated to 0 May 24 11:43:20.847: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 24 11:43:30.855: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 11:43:30.855: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 24 11:43:30.855: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 24 11:43:30.868: INFO: POD NODE PHASE GRACE CONDITIONS May 24 11:43:30.868: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:38 +0000 UTC }] May 24 11:43:30.868: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC }] May 24 11:43:30.868: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC }] May 24 11:43:30.868: INFO: May 24 11:43:30.868: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 11:43:31.885: INFO: POD NODE PHASE GRACE CONDITIONS May 24 11:43:31.885: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:38 +0000 UTC }] May 24 11:43:31.885: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC }] May 24 11:43:31.885: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC }] May 24 11:43:31.885: INFO: May 24 11:43:31.885: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 11:43:32.890: INFO: POD NODE PHASE GRACE CONDITIONS May 24 11:43:32.890: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-05-24 11:42:38 +0000 UTC }] May 24 11:43:32.890: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC }] May 24 11:43:32.890: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC }] May 24 11:43:32.890: INFO: May 24 11:43:32.890: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 11:43:33.895: INFO: POD NODE PHASE GRACE CONDITIONS May 24 11:43:33.895: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:38 +0000 UTC }] May 24 11:43:33.895: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC }] May 24 11:43:33.895: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC }] May 24 11:43:33.895: INFO: May 24 11:43:33.895: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 11:43:34.899: INFO: POD NODE PHASE GRACE CONDITIONS May 24 11:43:34.900: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:38 +0000 UTC }] May 24 11:43:34.900: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC }] May 24 11:43:34.900: INFO: May 24 11:43:34.900: INFO: StatefulSet ss has not reached scale 0, at 2 May 24 11:43:35.904: INFO: POD NODE PHASE GRACE CONDITIONS May 24 11:43:35.904: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:38 +0000 UTC }] May 24 11:43:35.904: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC }] May 24 11:43:35.904: INFO: May 24 11:43:35.904: INFO: StatefulSet ss has not reached scale 0, at 2 May 24 11:43:36.909: INFO: POD NODE PHASE GRACE CONDITIONS May 24 11:43:36.909: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:38 +0000 UTC }] May 24 11:43:36.909: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC }] May 24 11:43:36.909: INFO: May 24 11:43:36.909: INFO: StatefulSet ss has not reached scale 0, at 2 May 24 11:43:37.914: INFO: POD NODE PHASE GRACE CONDITIONS May 24 11:43:37.914: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:38 +0000 UTC }] May 24 11:43:37.915: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 
00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC }] May 24 11:43:37.915: INFO: May 24 11:43:37.915: INFO: StatefulSet ss has not reached scale 0, at 2 May 24 11:43:38.919: INFO: POD NODE PHASE GRACE CONDITIONS May 24 11:43:38.919: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:38 +0000 UTC }] May 24 11:43:38.919: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC }] May 24 11:43:38.919: INFO: May 24 11:43:38.919: INFO: StatefulSet ss has not reached scale 0, at 2 May 24 11:43:39.924: INFO: POD NODE PHASE GRACE CONDITIONS May 24 11:43:39.924: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:38 +0000 UTC }] May 24 11:43:39.924: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:43:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:42:59 +0000 UTC }] May 24 11:43:39.924: INFO: May 24 11:43:39.924: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-qqgrq May 24 11:43:40.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:43:41.067: INFO: rc: 1 May 24 11:43:41.067: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001898c90 exit status 1 true [0xc0007676d8 0xc000767730 0xc000767798] [0xc0007676d8 0xc000767730 0xc000767798] [0xc000767708 0xc000767780] [0x935700 0x935700] 0xc001ec2780 }: Command stdout: stderr: error: unable to upgrade connection: container
not found ("nginx") error: exit status 1 May 24 11:43:51.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:43:51.166: INFO: rc: 1 May 24 11:43:51.167: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001898e40 exit status 1 true [0xc0007677c0 0xc000767800 0xc000767840] [0xc0007677c0 0xc000767800 0xc000767840] [0xc0007677e8 0xc000767810] [0x935700 0x935700] 0xc001ec2b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:44:01.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:44:01.265: INFO: rc: 1 May 24 11:44:01.265: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001898f90 exit status 1 true [0xc000767858 0xc0007678c8 0xc0007678e0] [0xc000767858 0xc0007678c8 0xc0007678e0] [0xc0007678a8 0xc0007678d8] [0x935700 0x935700] 0xc001ec2de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:44:11.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:44:11.361: INFO: rc: 1 May 24 11:44:11.361: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b1ae70 exit status 1 true [0xc000a7eeb8 0xc000a7eed0 0xc000a7ef08] [0xc000a7eeb8 0xc000a7eed0 0xc000a7ef08] [0xc000a7eec8 0xc000a7eef0] [0x935700 0x935700] 0xc0025d1740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:44:21.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:44:21.448: INFO: rc: 1 May 24 11:44:21.448: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018990e0 exit status 1 true [0xc0007678f0 0xc000767940 0xc000767978] [0xc0007678f0 0xc000767940 0xc000767978] [0xc000767920 0xc000767958] [0x935700 0x935700] 0xc001ec30e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:44:31.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh 
-c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:44:31.561: INFO: rc: 1 May 24 11:44:31.561: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001899200 exit status 1 true [0xc000767998 0xc000767a00 0xc000767a70] [0xc000767998 0xc000767a00 0xc000767a70] [0xc0007679f0 0xc000767a50] [0x935700 0x935700] 0xc001ec3380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:44:41.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:44:41.658: INFO: rc: 1 May 24 11:44:41.658: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013e8240 exit status 1 true [0xc00200c010 0xc00200c060 0xc00200c0b0] [0xc00200c010 0xc00200c060 0xc00200c0b0] [0xc00200c040 0xc00200c0a0] [0x935700 0x935700] 0xc001e56840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:44:51.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:44:51.762: INFO: rc: 1 May 24 11:44:51.762: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f8e0f0 exit status 1 true [0xc00016e000 0xc000a7e020 0xc000a7e060] [0xc00016e000 0xc000a7e020 0xc000a7e060] [0xc000a7e010 0xc000a7e040] [0x935700 0x935700] 0xc0013b82a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:45:01.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:45:01.863: INFO: rc: 1 May 24 11:45:01.863: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c40120 exit status 1 true [0xc000766038 0xc000766068 0xc0007661c0] [0xc000766038 0xc000766068 0xc0007661c0] [0xc000766060 0xc000766138] [0x935700 0x935700] 0xc00205a300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:45:11.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:45:11.957: INFO: rc: 1 May 24 11:45:11.957: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f8e270 exit status 1 true [0xc000a7e080 0xc000a7e100 0xc000a7e180] [0xc000a7e080 0xc000a7e100 0xc000a7e180] [0xc000a7e0d8 0xc000a7e160] [0x935700 0x935700] 0xc0013b8660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:45:21.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:45:22.049: INFO: rc: 1 May 24 11:45:22.049: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f70120 exit status 1 true [0xc000a08018 0xc000a08110 0xc000a081c8] [0xc000a08018 0xc000a08110 0xc000a081c8] [0xc000a080e8 0xc000a08198] [0x935700 0x935700] 0xc0024bdbc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:45:32.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:45:32.144: INFO: rc: 1 May 24 11:45:32.145: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002324120 exit status 1 true [0xc000444088 0xc0004441a8 0xc0004441d8] [0xc000444088 0xc0004441a8 0xc0004441d8] [0xc000444198 0xc0004441c8] [0x935700 0x935700] 0xc00219e480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:45:42.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:45:42.235: INFO: rc: 1 May 24 11:45:42.235: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f70270 exit status 1 true [0xc000a081e8 0xc000a08210 0xc000a08300] [0xc000a081e8 0xc000a08210 0xc000a08300] [0xc000a08208 0xc000a08278] [0x935700 0x935700] 0xc0024bde60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:45:52.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:45:52.331: INFO: rc: 1 May 24 11:45:52.331: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error 
from server (NotFound): pods "ss-0" not found [] 0xc0023242a0 exit status 1 true [0xc000444208 0xc0004442d0 0xc000444340] [0xc000444208 0xc0004442d0 0xc000444340] [0xc000444280 0xc000444330] [0x935700 0x935700] 0xc00219e900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:46:02.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:46:02.424: INFO: rc: 1 May 24 11:46:02.424: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023243c0 exit status 1 true [0xc000444358 0xc000444370 0xc0004443e8] [0xc000444358 0xc000444370 0xc0004443e8] [0xc000444368 0xc0004443b8] [0x935700 0x935700] 0xc00219ec00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:46:12.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:46:12.506: INFO: rc: 1 May 24 11:46:12.506: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023244e0 exit status 1 true [0xc0004443f0 0xc000444438 0xc000444478] [0xc0004443f0 0xc000444438 0xc000444478] [0xc000444418 0xc000444458] [0x935700 0x935700] 0xc00219ef00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:46:22.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:46:22.600: INFO: rc: 1 May 24 11:46:22.600: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f8e3c0 exit status 1 true [0xc000a7e190 0xc000a7e1c0 0xc000a7e1f0] [0xc000a7e190 0xc000a7e1c0 0xc000a7e1f0] [0xc000a7e1a8 0xc000a7e1e8] [0x935700 0x935700] 0xc0013b8960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:46:32.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:46:32.695: INFO: rc: 1 May 24 11:46:32.695: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c402d0 exit status 1 true [0xc000766290 0xc0007663d8 0xc000766488] [0xc000766290 0xc0007663d8 0xc000766488] [0xc000766380 0xc000766470] 
[0x935700 0x935700] 0xc00205a7e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:46:42.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:46:42.790: INFO: rc: 1 May 24 11:46:42.790: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c40150 exit status 1 true [0xc000766038 0xc000766068 0xc0007661c0] [0xc000766038 0xc000766068 0xc0007661c0] [0xc000766060 0xc000766138] [0x935700 0x935700] 0xc00205a300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:46:52.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:46:52.885: INFO: rc: 1 May 24 11:46:52.885: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002324150 exit status 1 true [0xc000444088 0xc0004441a8 0xc0004441d8] [0xc000444088 0xc0004441a8 0xc0004441d8] [0xc000444198 0xc0004441c8] [0x935700 0x935700] 0xc00219e480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:47:02.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:47:02.981: INFO: rc: 1 May 24 11:47:02.982: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f8e150 exit status 1 true [0xc000a08018 0xc000a08110 0xc000a081c8] [0xc000a08018 0xc000a08110 0xc000a081c8] [0xc000a080e8 0xc000a08198] [0x935700 0x935700] 0xc0024bdbc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:47:12.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:47:13.083: INFO: rc: 1 May 24 11:47:13.083: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f8e2a0 exit status 1 true [0xc000a081e8 0xc000a08210 0xc000a08300] [0xc000a081e8 0xc000a08210 0xc000a08300] [0xc000a08208 0xc000a08278] [0x935700 0x935700] 0xc0024bde60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:47:23.084: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:47:23.175: INFO: rc: 1 May 24 11:47:23.175: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c40300 exit status 1 true [0xc000766290 0xc0007663d8 0xc000766488] [0xc000766290 0xc0007663d8 0xc000766488] [0xc000766380 0xc000766470] [0x935700 0x935700] 0xc00205a7e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:47:33.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:47:33.263: INFO: rc: 1 May 24 11:47:33.263: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f8e3f0 exit status 1 true [0xc000a08308 0xc000a08350 0xc000a083f0] [0xc000a08308 0xc000a08350 0xc000a083f0] [0xc000a08338 0xc000a083d8] [0x935700 0x935700] 0xc0013b8120 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:47:43.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:47:43.348: INFO: rc: 1 May 24 11:47:43.348: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f701b0 exit status 1 true [0xc000a7e008 0xc000a7e028 0xc000a7e080] [0xc000a7e008 0xc000a7e028 0xc000a7e080] [0xc000a7e020 0xc000a7e060] [0x935700 0x935700] 0xc0019103c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:47:53.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:47:53.443: INFO: rc: 1 May 24 11:47:53.443: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f8e540 exit status 1 true [0xc000a08430 0xc000a08460 0xc000a08488] [0xc000a08430 0xc000a08460 0xc000a08488] [0xc000a08450 0xc000a08478] [0x935700 0x935700] 0xc0013b85a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:48:03.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:48:03.536: INFO: rc: 1 May 24 
11:48:03.536: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f8e660 exit status 1 true [0xc000a084a0 0xc000a08518 0xc000a08540] [0xc000a084a0 0xc000a08518 0xc000a08540] [0xc000a084f8 0xc000a08530] [0x935700 0x935700] 0xc0013b88a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:48:13.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:48:13.630: INFO: rc: 1 May 24 11:48:13.630: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f8e7b0 exit status 1 true [0xc000a08548 0xc000a08590 0xc000a085f0] [0xc000a08548 0xc000a08590 0xc000a085f0] [0xc000a08588 0xc000a085e0] [0x935700 0x935700] 0xc0013b8c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:48:23.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:48:23.721: INFO: rc: 1 May 24 11:48:23.721: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c40450 exit status 1 true [0xc000766500 0xc000766680 0xc000766808] [0xc000766500 0xc000766680 0xc000766808] [0xc0007665e0 0xc000766760] [0x935700 0x935700] 0xc00205aa80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:48:33.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:48:33.809: INFO: rc: 1 May 24 11:48:33.809: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f70330 exit status 1 true [0xc000a7e0b0 0xc000a7e128 0xc000a7e190] [0xc000a7e0b0 0xc000a7e128 0xc000a7e190] [0xc000a7e100 0xc000a7e180] [0x935700 0x935700] 0xc001910840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 24 11:48:43.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qqgrq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 24 11:48:43.899: INFO: rc: 1 May 24 11:48:43.899: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: May 24 11:48:43.899: INFO: Scaling statefulset ss to 0 May 24 11:48:43.976: INFO: Waiting for 
statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 24 11:48:43.979: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qqgrq May 24 11:48:43.981: INFO: Scaling statefulset ss to 0 May 24 11:48:43.990: INFO: Waiting for statefulset status.replicas updated to 0 May 24 11:48:43.993: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:48:44.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-qqgrq" for this suite. May 24 11:48:50.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:48:50.085: INFO: namespace: e2e-tests-statefulset-qqgrq, resource: bindings, ignored listing per whitelist May 24 11:48:50.120: INFO: namespace e2e-tests-statefulset-qqgrq deletion completed in 6.108838171s • [SLOW TEST:371.407 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:48:50.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-88c3d902-9db4-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 11:48:50.277: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-88c76eb2-9db4-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-sxcqz" to be "success or failure" May 24 11:48:50.287: INFO: Pod "pod-projected-configmaps-88c76eb2-9db4-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 9.833462ms May 24 11:48:52.291: INFO: Pod "pod-projected-configmaps-88c76eb2-9db4-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013935683s May 24 11:48:54.295: INFO: Pod "pod-projected-configmaps-88c76eb2-9db4-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017895604s STEP: Saw pod success May 24 11:48:54.295: INFO: Pod "pod-projected-configmaps-88c76eb2-9db4-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:48:54.298: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-88c76eb2-9db4-11ea-9618-0242ac110016 container projected-configmap-volume-test: STEP: delete the pod May 24 11:48:54.335: INFO: Waiting for pod pod-projected-configmaps-88c76eb2-9db4-11ea-9618-0242ac110016 to disappear May 24 11:48:54.341: INFO: Pod pod-projected-configmaps-88c76eb2-9db4-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:48:54.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sxcqz" for this suite. May 24 11:49:00.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:49:00.438: INFO: namespace: e2e-tests-projected-sxcqz, resource: bindings, ignored listing per whitelist May 24 11:49:00.471: INFO: namespace e2e-tests-projected-sxcqz deletion completed in 6.127202958s • [SLOW TEST:10.351 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:49:00.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command May 24 11:49:00.565: INFO: Waiting up to 5m0s for pod "client-containers-8eea0ab0-9db4-11ea-9618-0242ac110016" in namespace "e2e-tests-containers-tqx4q" to be "success or failure" May 24 11:49:00.569: INFO: Pod "client-containers-8eea0ab0-9db4-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060949ms May 24 11:49:02.573: INFO: Pod "client-containers-8eea0ab0-9db4-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008223763s May 24 11:49:04.577: INFO: Pod "client-containers-8eea0ab0-9db4-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 4.012198316s May 24 11:49:06.582: INFO: Pod "client-containers-8eea0ab0-9db4-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016696419s STEP: Saw pod success May 24 11:49:06.582: INFO: Pod "client-containers-8eea0ab0-9db4-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:49:06.585: INFO: Trying to get logs from node hunter-worker2 pod client-containers-8eea0ab0-9db4-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 11:49:06.630: INFO: Waiting for pod client-containers-8eea0ab0-9db4-11ea-9618-0242ac110016 to disappear May 24 11:49:06.635: INFO: Pod client-containers-8eea0ab0-9db4-11ea-9618-0242ac110016 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:49:06.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-tqx4q" for this suite. May 24 11:49:12.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:49:12.684: INFO: namespace: e2e-tests-containers-tqx4q, resource: bindings, ignored listing per whitelist May 24 11:49:12.719: INFO: namespace e2e-tests-containers-tqx4q deletion completed in 6.081191657s • [SLOW TEST:12.247 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:49:12.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-mwdfs I0524 11:49:12.839867 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-mwdfs, replica count: 1 I0524 11:49:13.890337 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 11:49:14.890519 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 11:49:15.890709 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 11:49:16.039: INFO: Created: latency-svc-5h9sk May 24 11:49:16.062: INFO: Got endpoints: latency-svc-5h9sk [71.841554ms] May 24 11:49:16.140: INFO: Created: latency-svc-746n6 May 24 11:49:16.151: INFO: Got endpoints: latency-svc-746n6 [88.052133ms] May 24 11:49:16.181: INFO: Created: latency-svc-5hbtc May 24 11:49:16.211: INFO: Got endpoints: latency-svc-5hbtc [148.155509ms] May 24 11:49:16.262: INFO: Created: latency-svc-c2x8p May 24 11:49:16.271: INFO: Got 
endpoints: latency-svc-c2x8p [208.524548ms] May 24 11:49:16.290: INFO: Created: latency-svc-x5vj2 May 24 11:49:16.302: INFO: Got endpoints: latency-svc-x5vj2 [239.30716ms] May 24 11:49:16.321: INFO: Created: latency-svc-pm2bj May 24 11:49:16.338: INFO: Got endpoints: latency-svc-pm2bj [275.807704ms] May 24 11:49:16.360: INFO: Created: latency-svc-69qtm May 24 11:49:16.405: INFO: Got endpoints: latency-svc-69qtm [342.780796ms] May 24 11:49:16.414: INFO: Created: latency-svc-t5z29 May 24 11:49:16.429: INFO: Got endpoints: latency-svc-t5z29 [366.531247ms] May 24 11:49:16.451: INFO: Created: latency-svc-pm5rb May 24 11:49:16.465: INFO: Got endpoints: latency-svc-pm5rb [402.952239ms] May 24 11:49:16.487: INFO: Created: latency-svc-vqbck May 24 11:49:16.501: INFO: Got endpoints: latency-svc-vqbck [438.736505ms] May 24 11:49:16.544: INFO: Created: latency-svc-cr4mj May 24 11:49:16.547: INFO: Got endpoints: latency-svc-cr4mj [484.389836ms] May 24 11:49:16.573: INFO: Created: latency-svc-d56zk May 24 11:49:16.585: INFO: Got endpoints: latency-svc-d56zk [522.708119ms] May 24 11:49:16.609: INFO: Created: latency-svc-hp6qf May 24 11:49:16.747: INFO: Got endpoints: latency-svc-hp6qf [684.736405ms] May 24 11:49:16.751: INFO: Created: latency-svc-8ktn2 May 24 11:49:16.759: INFO: Got endpoints: latency-svc-8ktn2 [696.646653ms] May 24 11:49:16.800: INFO: Created: latency-svc-dd69v May 24 11:49:16.814: INFO: Got endpoints: latency-svc-dd69v [751.114373ms] May 24 11:49:16.885: INFO: Created: latency-svc-hmff8 May 24 11:49:16.887: INFO: Got endpoints: latency-svc-hmff8 [824.768735ms] May 24 11:49:16.949: INFO: Created: latency-svc-zjhnf May 24 11:49:16.958: INFO: Got endpoints: latency-svc-zjhnf [807.035856ms] May 24 11:49:16.984: INFO: Created: latency-svc-cp7wb May 24 11:49:17.034: INFO: Got endpoints: latency-svc-cp7wb [823.590277ms] May 24 11:49:17.046: INFO: Created: latency-svc-hf8m6 May 24 11:49:17.076: INFO: Got endpoints: latency-svc-hf8m6 [805.041778ms] May 24 11:49:17.107: INFO: Created: latency-svc-p7vld May 24 11:49:17.121: INFO: Got endpoints: latency-svc-p7vld [819.222605ms] May 24 11:49:17.178: INFO: Created: latency-svc-dhtfm May 24 11:49:17.183: INFO: Got endpoints: latency-svc-dhtfm [844.141479ms] May 24 11:49:17.207: INFO: Created: latency-svc-cswqg May 24 11:49:17.223: INFO: Got endpoints: latency-svc-cswqg [817.805652ms] May 24 11:49:17.242: INFO: Created: latency-svc-jzcpt May 24 11:49:17.253: INFO: Got endpoints: latency-svc-jzcpt [824.27014ms] May 24 11:49:17.274: INFO: Created: latency-svc-sltv2 May 24 11:49:17.315: INFO: Got endpoints: latency-svc-sltv2 [849.937646ms] May 24 11:49:17.328: INFO: Created: latency-svc-qgtrz May 24 11:49:17.360: INFO: Got endpoints: latency-svc-qgtrz [858.362251ms] May 24 11:49:17.411: INFO: Created: latency-svc-cs6lq May 24 11:49:17.489: INFO: Got endpoints: latency-svc-cs6lq [942.227342ms] May 24 11:49:17.532: INFO: Created: latency-svc-z7qg5 May 24 11:49:17.548: INFO: Got endpoints: latency-svc-z7qg5 [962.817276ms] May 24 11:49:17.574: INFO: Created: latency-svc-6bvmw May 24 11:49:17.663: INFO: Got endpoints: latency-svc-6bvmw [915.575111ms] May 24 11:49:17.665: INFO: Created: latency-svc-9w4pj May 24 11:49:17.681: INFO: Got endpoints: latency-svc-9w4pj [921.189425ms] May 24 11:49:17.716: INFO: Created: latency-svc-7xbdh May 24 11:49:17.729: INFO: Got endpoints: latency-svc-7xbdh [915.596308ms] May 24 11:49:17.760: INFO: Created: latency-svc-v7bp8 May 24 11:49:17.814: INFO: Got endpoints: latency-svc-v7bp8 [926.336826ms] May 24 11:49:17.844: INFO: 
Created: latency-svc-dsgj8 May 24 11:49:17.862: INFO: Got endpoints: latency-svc-dsgj8 [903.807941ms] May 24 11:49:17.890: INFO: Created: latency-svc-szt7l May 24 11:49:17.938: INFO: Got endpoints: latency-svc-szt7l [904.00126ms] May 24 11:49:17.951: INFO: Created: latency-svc-k8nkr May 24 11:49:17.964: INFO: Got endpoints: latency-svc-k8nkr [888.007455ms] May 24 11:49:17.998: INFO: Created: latency-svc-gj27p May 24 11:49:18.012: INFO: Got endpoints: latency-svc-gj27p [891.016089ms] May 24 11:49:18.103: INFO: Created: latency-svc-zqx84 May 24 11:49:18.120: INFO: Got endpoints: latency-svc-zqx84 [937.27001ms] May 24 11:49:18.154: INFO: Created: latency-svc-qvkbc May 24 11:49:18.169: INFO: Got endpoints: latency-svc-qvkbc [945.303562ms] May 24 11:49:18.190: INFO: Created: latency-svc-75pdp May 24 11:49:18.199: INFO: Got endpoints: latency-svc-75pdp [945.366935ms] May 24 11:49:18.251: INFO: Created: latency-svc-z47gt May 24 11:49:18.259: INFO: Got endpoints: latency-svc-z47gt [943.598476ms] May 24 11:49:18.282: INFO: Created: latency-svc-ghhnh May 24 11:49:18.296: INFO: Got endpoints: latency-svc-ghhnh [935.862688ms] May 24 11:49:18.318: INFO: Created: latency-svc-qhpgv May 24 11:49:18.333: INFO: Got endpoints: latency-svc-qhpgv [844.046167ms] May 24 11:49:18.400: INFO: Created: latency-svc-xcjj6 May 24 11:49:18.411: INFO: Got endpoints: latency-svc-xcjj6 [863.194485ms] May 24 11:49:18.459: INFO: Created: latency-svc-rwgrv May 24 11:49:18.470: INFO: Got endpoints: latency-svc-rwgrv [136.580563ms] May 24 11:49:18.490: INFO: Created: latency-svc-5chpq May 24 11:49:18.561: INFO: Got endpoints: latency-svc-5chpq [898.151992ms] May 24 11:49:18.563: INFO: Created: latency-svc-kr7sg May 24 11:49:18.566: INFO: Got endpoints: latency-svc-kr7sg [885.483393ms] May 24 11:49:18.600: INFO: Created: latency-svc-26ljn May 24 11:49:18.615: INFO: Got endpoints: latency-svc-26ljn [885.740497ms] May 24 11:49:18.646: INFO: Created: latency-svc-tl4v8 May 24 11:49:18.705: INFO: Got endpoints: latency-svc-tl4v8 [890.967237ms] May 24 11:49:18.742: INFO: Created: latency-svc-59fkm May 24 11:49:18.798: INFO: Got endpoints: latency-svc-59fkm [936.35613ms] May 24 11:49:18.873: INFO: Created: latency-svc-w42l9 May 24 11:49:18.880: INFO: Got endpoints: latency-svc-w42l9 [941.347557ms] May 24 11:49:18.915: INFO: Created: latency-svc-r4d6l May 24 11:49:18.928: INFO: Got endpoints: latency-svc-r4d6l [963.495929ms] May 24 11:49:18.958: INFO: Created: latency-svc-n7wch May 24 11:49:19.012: INFO: Got endpoints: latency-svc-n7wch [999.616762ms] May 24 11:49:19.026: INFO: Created: latency-svc-s9vwz May 24 11:49:19.042: INFO: Got endpoints: latency-svc-s9vwz [922.256377ms] May 24 11:49:19.068: INFO: Created: latency-svc-mkkbj May 24 11:49:19.098: INFO: Got endpoints: latency-svc-mkkbj [929.076589ms] May 24 11:49:19.148: INFO: Created: latency-svc-8qdn2 May 24 11:49:19.157: INFO: Got endpoints: latency-svc-8qdn2 [957.728442ms] May 24 11:49:19.179: INFO: Created: latency-svc-dztsv May 24 11:49:19.193: INFO: Got endpoints: latency-svc-dztsv [934.005892ms] May 24 11:49:19.218: INFO: Created: latency-svc-fm8b8 May 24 11:49:19.230: INFO: Got endpoints: latency-svc-fm8b8 [933.874609ms] May 24 11:49:19.298: INFO: Created: latency-svc-b4k6s May 24 11:49:19.307: INFO: Got endpoints: latency-svc-b4k6s [895.468325ms] May 24 11:49:19.343: INFO: Created: latency-svc-x6pmn May 24 11:49:19.356: INFO: Got endpoints: latency-svc-x6pmn [885.576472ms] May 24 11:49:19.379: INFO: Created: latency-svc-tddsq May 24 11:49:19.459: INFO: Got endpoints: 
latency-svc-tddsq [898.011268ms] May 24 11:49:19.459: INFO: Created: latency-svc-lp5dw May 24 11:49:19.476: INFO: Got endpoints: latency-svc-lp5dw [910.192453ms] May 24 11:49:19.597: INFO: Created: latency-svc-tb7zn May 24 11:49:19.641: INFO: Got endpoints: latency-svc-tb7zn [1.025839669s] May 24 11:49:19.741: INFO: Created: latency-svc-x6rcm May 24 11:49:19.764: INFO: Got endpoints: latency-svc-x6rcm [1.058920886s] May 24 11:49:19.794: INFO: Created: latency-svc-vs2nh May 24 11:49:19.807: INFO: Got endpoints: latency-svc-vs2nh [1.008538526s] May 24 11:49:19.830: INFO: Created: latency-svc-chm5n May 24 11:49:19.903: INFO: Got endpoints: latency-svc-chm5n [1.02277158s] May 24 11:49:19.905: INFO: Created: latency-svc-swhmv May 24 11:49:19.909: INFO: Got endpoints: latency-svc-swhmv [981.509071ms] May 24 11:49:19.935: INFO: Created: latency-svc-zkz98 May 24 11:49:19.946: INFO: Got endpoints: latency-svc-zkz98 [933.794601ms] May 24 11:49:19.978: INFO: Created: latency-svc-pth46 May 24 11:49:19.988: INFO: Got endpoints: latency-svc-pth46 [945.290768ms] May 24 11:49:20.040: INFO: Created: latency-svc-s69rm May 24 11:49:20.043: INFO: Got endpoints: latency-svc-s69rm [945.181974ms] May 24 11:49:20.070: INFO: Created: latency-svc-rc8tm May 24 11:49:20.084: INFO: Got endpoints: latency-svc-rc8tm [927.627776ms] May 24 11:49:20.139: INFO: Created: latency-svc-9nbbd May 24 11:49:20.184: INFO: Got endpoints: latency-svc-9nbbd [990.900577ms] May 24 11:49:20.208: INFO: Created: latency-svc-bqnsh May 24 11:49:20.223: INFO: Got endpoints: latency-svc-bqnsh [993.204036ms] May 24 11:49:20.243: INFO: Created: latency-svc-9bfnd May 24 11:49:20.259: INFO: Got endpoints: latency-svc-9bfnd [952.025674ms] May 24 11:49:20.280: INFO: Created: latency-svc-ldklj May 24 11:49:20.339: INFO: Got endpoints: latency-svc-ldklj [983.516129ms] May 24 11:49:20.361: INFO: Created: latency-svc-n5s49 May 24 11:49:20.373: INFO: Got endpoints: latency-svc-n5s49 [914.225995ms] May 24 11:49:20.397: INFO: Created: latency-svc-6zmnm May 24 11:49:20.410: INFO: Got endpoints: latency-svc-6zmnm [933.442682ms] May 24 11:49:20.429: INFO: Created: latency-svc-c697p May 24 11:49:20.489: INFO: Got endpoints: latency-svc-c697p [848.124629ms] May 24 11:49:20.491: INFO: Created: latency-svc-2vbsc May 24 11:49:20.500: INFO: Got endpoints: latency-svc-2vbsc [736.2699ms] May 24 11:49:20.519: INFO: Created: latency-svc-npkmf May 24 11:49:20.537: INFO: Got endpoints: latency-svc-npkmf [730.331329ms] May 24 11:49:20.559: INFO: Created: latency-svc-ppsf7 May 24 11:49:20.583: INFO: Got endpoints: latency-svc-ppsf7 [680.201835ms] May 24 11:49:20.640: INFO: Created: latency-svc-mg2hb May 24 11:49:20.645: INFO: Got endpoints: latency-svc-mg2hb [735.779816ms] May 24 11:49:20.670: INFO: Created: latency-svc-sfqtn May 24 11:49:20.681: INFO: Got endpoints: latency-svc-sfqtn [735.488711ms] May 24 11:49:20.705: INFO: Created: latency-svc-n448f May 24 11:49:20.718: INFO: Got endpoints: latency-svc-n448f [730.05893ms] May 24 11:49:20.735: INFO: Created: latency-svc-vd4ns May 24 11:49:20.819: INFO: Got endpoints: latency-svc-vd4ns [775.62928ms] May 24 11:49:20.820: INFO: Created: latency-svc-hlxtk May 24 11:49:20.826: INFO: Got endpoints: latency-svc-hlxtk [741.333912ms] May 24 11:49:20.889: INFO: Created: latency-svc-zszbq May 24 11:49:20.975: INFO: Got endpoints: latency-svc-zszbq [790.798118ms] May 24 11:49:21.011: INFO: Created: latency-svc-msv4n May 24 11:49:21.014: INFO: Got endpoints: latency-svc-msv4n [791.070049ms] May 24 11:49:21.070: INFO: Created: 
latency-svc-k6mv5 May 24 11:49:21.073: INFO: Got endpoints: latency-svc-k6mv5 [814.103326ms] May 24 11:49:21.099: INFO: Created: latency-svc-ptjkc May 24 11:49:21.104: INFO: Got endpoints: latency-svc-ptjkc [765.125895ms] May 24 11:49:21.123: INFO: Created: latency-svc-gdrhj May 24 11:49:21.141: INFO: Got endpoints: latency-svc-gdrhj [767.429719ms] May 24 11:49:21.161: INFO: Created: latency-svc-k4npq May 24 11:49:21.226: INFO: Got endpoints: latency-svc-k4npq [815.940236ms] May 24 11:49:21.229: INFO: Created: latency-svc-dqtr5 May 24 11:49:21.231: INFO: Got endpoints: latency-svc-dqtr5 [741.719714ms] May 24 11:49:21.261: INFO: Created: latency-svc-k4246 May 24 11:49:21.274: INFO: Got endpoints: latency-svc-k4246 [773.733404ms] May 24 11:49:21.315: INFO: Created: latency-svc-gfcth May 24 11:49:21.375: INFO: Got endpoints: latency-svc-gfcth [838.31377ms] May 24 11:49:21.377: INFO: Created: latency-svc-bgjrv May 24 11:49:21.382: INFO: Got endpoints: latency-svc-bgjrv [799.186393ms] May 24 11:49:21.407: INFO: Created: latency-svc-6h2gx May 24 11:49:21.425: INFO: Got endpoints: latency-svc-6h2gx [779.478491ms] May 24 11:49:21.449: INFO: Created: latency-svc-4sslw May 24 11:49:21.467: INFO: Got endpoints: latency-svc-4sslw [785.432201ms] May 24 11:49:21.538: INFO: Created: latency-svc-xvg8v May 24 11:49:21.584: INFO: Got endpoints: latency-svc-xvg8v [866.483471ms] May 24 11:49:21.681: INFO: Created: latency-svc-d4qd5 May 24 11:49:21.697: INFO: Got endpoints: latency-svc-d4qd5 [878.749058ms] May 24 11:49:21.725: INFO: Created: latency-svc-8hkvq May 24 11:49:21.750: INFO: Got endpoints: latency-svc-8hkvq [924.278465ms] May 24 11:49:21.807: INFO: Created: latency-svc-nnndw May 24 11:49:21.810: INFO: Got endpoints: latency-svc-nnndw [834.511428ms] May 24 11:49:21.842: INFO: Created: latency-svc-nmmtz May 24 11:49:21.872: INFO: Got endpoints: latency-svc-nmmtz [857.827539ms] May 24 11:49:21.891: INFO: Created: latency-svc-5srbz May 24 11:49:21.944: INFO: Got endpoints: latency-svc-5srbz [871.221845ms] May 24 11:49:21.958: INFO: Created: latency-svc-s72ww May 24 11:49:21.994: INFO: Got endpoints: latency-svc-s72ww [889.739202ms] May 24 11:49:22.030: INFO: Created: latency-svc-tfv6m May 24 11:49:22.039: INFO: Got endpoints: latency-svc-tfv6m [897.596436ms] May 24 11:49:22.101: INFO: Created: latency-svc-wjl2v May 24 11:49:22.111: INFO: Got endpoints: latency-svc-wjl2v [884.971837ms] May 24 11:49:22.167: INFO: Created: latency-svc-t7ftz May 24 11:49:22.171: INFO: Got endpoints: latency-svc-t7ftz [940.173834ms] May 24 11:49:22.226: INFO: Created: latency-svc-vqprn May 24 11:49:22.228: INFO: Got endpoints: latency-svc-vqprn [954.413046ms] May 24 11:49:22.281: INFO: Created: latency-svc-r4qvk May 24 11:49:22.286: INFO: Got endpoints: latency-svc-r4qvk [910.173949ms] May 24 11:49:22.310: INFO: Created: latency-svc-c4nql May 24 11:49:22.316: INFO: Got endpoints: latency-svc-c4nql [933.633503ms] May 24 11:49:22.358: INFO: Created: latency-svc-4kvqg May 24 11:49:22.360: INFO: Got endpoints: latency-svc-4kvqg [934.990548ms] May 24 11:49:22.391: INFO: Created: latency-svc-dfcnm May 24 11:49:22.407: INFO: Got endpoints: latency-svc-dfcnm [939.830228ms] May 24 11:49:22.432: INFO: Created: latency-svc-rrjkq May 24 11:49:22.443: INFO: Got endpoints: latency-svc-rrjkq [858.456124ms] May 24 11:49:22.496: INFO: Created: latency-svc-lhr4f May 24 11:49:22.508: INFO: Got endpoints: latency-svc-lhr4f [810.335708ms] May 24 11:49:22.539: INFO: Created: latency-svc-fmdpr May 24 11:49:22.551: INFO: Got endpoints: 
latency-svc-fmdpr [801.095769ms] May 24 11:49:22.574: INFO: Created: latency-svc-fzww4 May 24 11:49:22.588: INFO: Got endpoints: latency-svc-fzww4 [778.280283ms] May 24 11:49:22.633: INFO: Created: latency-svc-8hxbk May 24 11:49:22.636: INFO: Got endpoints: latency-svc-8hxbk [763.928186ms] May 24 11:49:22.666: INFO: Created: latency-svc-9nr9j May 24 11:49:22.681: INFO: Got endpoints: latency-svc-9nr9j [736.68449ms] May 24 11:49:22.702: INFO: Created: latency-svc-mqp4x May 24 11:49:22.721: INFO: Got endpoints: latency-svc-mqp4x [726.308231ms] May 24 11:49:22.772: INFO: Created: latency-svc-fxz46 May 24 11:49:22.775: INFO: Got endpoints: latency-svc-fxz46 [735.95916ms] May 24 11:49:22.869: INFO: Created: latency-svc-dhrlv May 24 11:49:22.920: INFO: Got endpoints: latency-svc-dhrlv [809.628824ms] May 24 11:49:22.943: INFO: Created: latency-svc-kqd8x May 24 11:49:22.967: INFO: Got endpoints: latency-svc-kqd8x [795.806786ms] May 24 11:49:22.994: INFO: Created: latency-svc-wxs94 May 24 11:49:23.010: INFO: Got endpoints: latency-svc-wxs94 [781.169372ms] May 24 11:49:23.060: INFO: Created: latency-svc-qpt77 May 24 11:49:23.062: INFO: Got endpoints: latency-svc-qpt77 [776.787351ms] May 24 11:49:23.090: INFO: Created: latency-svc-v57w6 May 24 11:49:23.106: INFO: Got endpoints: latency-svc-v57w6 [790.358905ms] May 24 11:49:23.128: INFO: Created: latency-svc-x7xpq May 24 11:49:23.146: INFO: Got endpoints: latency-svc-x7xpq [785.91954ms] May 24 11:49:23.203: INFO: Created: latency-svc-vjjjk May 24 11:49:23.205: INFO: Got endpoints: latency-svc-vjjjk [798.316792ms] May 24 11:49:23.234: INFO: Created: latency-svc-kjr8c May 24 11:49:23.251: INFO: Got endpoints: latency-svc-kjr8c [808.10963ms] May 24 11:49:23.282: INFO: Created: latency-svc-h6x89 May 24 11:49:23.340: INFO: Got endpoints: latency-svc-h6x89 [831.706747ms] May 24 11:49:23.344: INFO: Created: latency-svc-x5p58 May 24 11:49:23.359: INFO: Got endpoints: latency-svc-x5p58 [808.118981ms] May 24 11:49:23.380: INFO: Created: latency-svc-qms8f May 24 11:49:23.390: INFO: Got endpoints: latency-svc-qms8f [801.889649ms] May 24 11:49:23.417: INFO: Created: latency-svc-xvpz7 May 24 11:49:23.501: INFO: Got endpoints: latency-svc-xvpz7 [865.40527ms] May 24 11:49:23.510: INFO: Created: latency-svc-sxqs8 May 24 11:49:23.522: INFO: Got endpoints: latency-svc-sxqs8 [841.22683ms] May 24 11:49:23.546: INFO: Created: latency-svc-dpjxm May 24 11:49:23.559: INFO: Got endpoints: latency-svc-dpjxm [838.19282ms] May 24 11:49:23.596: INFO: Created: latency-svc-nl22x May 24 11:49:23.639: INFO: Got endpoints: latency-svc-nl22x [864.487974ms] May 24 11:49:23.650: INFO: Created: latency-svc-r5zn7 May 24 11:49:23.667: INFO: Got endpoints: latency-svc-r5zn7 [746.720538ms] May 24 11:49:23.714: INFO: Created: latency-svc-zrjss May 24 11:49:23.734: INFO: Got endpoints: latency-svc-zrjss [766.35492ms] May 24 11:49:23.783: INFO: Created: latency-svc-8rpf4 May 24 11:49:23.794: INFO: Got endpoints: latency-svc-8rpf4 [784.336102ms] May 24 11:49:23.825: INFO: Created: latency-svc-5rphs May 24 11:49:23.836: INFO: Got endpoints: latency-svc-5rphs [773.334959ms] May 24 11:49:23.873: INFO: Created: latency-svc-gfz4r May 24 11:49:23.933: INFO: Got endpoints: latency-svc-gfz4r [826.937083ms] May 24 11:49:23.942: INFO: Created: latency-svc-l52sd May 24 11:49:23.956: INFO: Got endpoints: latency-svc-l52sd [810.342333ms] May 24 11:49:23.984: INFO: Created: latency-svc-sfqpv May 24 11:49:23.998: INFO: Got endpoints: latency-svc-sfqpv [793.435217ms] May 24 11:49:24.071: INFO: Created: 
latency-svc-h8t4t May 24 11:49:24.074: INFO: Got endpoints: latency-svc-h8t4t [822.56826ms] May 24 11:49:24.112: INFO: Created: latency-svc-zvhmf May 24 11:49:24.126: INFO: Got endpoints: latency-svc-zvhmf [785.985651ms] May 24 11:49:24.164: INFO: Created: latency-svc-hgmlg May 24 11:49:24.238: INFO: Got endpoints: latency-svc-hgmlg [878.321974ms] May 24 11:49:24.286: INFO: Created: latency-svc-7szlz May 24 11:49:24.300: INFO: Got endpoints: latency-svc-7szlz [909.84285ms] May 24 11:49:24.323: INFO: Created: latency-svc-vxqrb May 24 11:49:24.336: INFO: Got endpoints: latency-svc-vxqrb [834.712543ms] May 24 11:49:24.384: INFO: Created: latency-svc-pbnlt May 24 11:49:24.388: INFO: Got endpoints: latency-svc-pbnlt [865.826629ms] May 24 11:49:24.434: INFO: Created: latency-svc-wqmxr May 24 11:49:24.466: INFO: Got endpoints: latency-svc-wqmxr [907.301343ms] May 24 11:49:24.531: INFO: Created: latency-svc-pv7wm May 24 11:49:24.534: INFO: Got endpoints: latency-svc-pv7wm [894.691053ms] May 24 11:49:24.562: INFO: Created: latency-svc-7vt45 May 24 11:49:24.571: INFO: Got endpoints: latency-svc-7vt45 [903.388419ms] May 24 11:49:24.596: INFO: Created: latency-svc-7swgg May 24 11:49:24.613: INFO: Got endpoints: latency-svc-7swgg [879.727449ms] May 24 11:49:24.699: INFO: Created: latency-svc-q65qc May 24 11:49:24.702: INFO: Got endpoints: latency-svc-q65qc [908.110291ms] May 24 11:49:24.730: INFO: Created: latency-svc-nptzn May 24 11:49:24.745: INFO: Got endpoints: latency-svc-nptzn [909.475306ms] May 24 11:49:24.772: INFO: Created: latency-svc-svn27 May 24 11:49:24.782: INFO: Got endpoints: latency-svc-svn27 [848.661052ms] May 24 11:49:24.834: INFO: Created: latency-svc-d26sn May 24 11:49:24.836: INFO: Got endpoints: latency-svc-d26sn [879.942551ms] May 24 11:49:24.883: INFO: Created: latency-svc-td5kt May 24 11:49:24.908: INFO: Got endpoints: latency-svc-td5kt [909.843265ms] May 24 11:49:24.968: INFO: Created: latency-svc-bbgpz May 24 11:49:25.006: INFO: Got endpoints: latency-svc-bbgpz [932.175054ms] May 24 11:49:25.052: INFO: Created: latency-svc-p8pnl May 24 11:49:25.065: INFO: Got endpoints: latency-svc-p8pnl [939.066728ms] May 24 11:49:25.124: INFO: Created: latency-svc-cznj2 May 24 11:49:25.131: INFO: Got endpoints: latency-svc-cznj2 [893.037611ms] May 24 11:49:25.154: INFO: Created: latency-svc-x8xlt May 24 11:49:25.167: INFO: Got endpoints: latency-svc-x8xlt [867.583691ms] May 24 11:49:25.192: INFO: Created: latency-svc-grrpk May 24 11:49:25.210: INFO: Got endpoints: latency-svc-grrpk [874.447238ms] May 24 11:49:25.268: INFO: Created: latency-svc-qgxkj May 24 11:49:25.282: INFO: Got endpoints: latency-svc-qgxkj [893.292103ms] May 24 11:49:25.316: INFO: Created: latency-svc-xnjj4 May 24 11:49:25.330: INFO: Got endpoints: latency-svc-xnjj4 [864.288787ms] May 24 11:49:25.358: INFO: Created: latency-svc-25khj May 24 11:49:25.400: INFO: Got endpoints: latency-svc-25khj [865.865251ms] May 24 11:49:25.426: INFO: Created: latency-svc-wllcv May 24 11:49:25.439: INFO: Got endpoints: latency-svc-wllcv [868.178301ms] May 24 11:49:25.462: INFO: Created: latency-svc-28xsn May 24 11:49:25.487: INFO: Got endpoints: latency-svc-28xsn [873.779757ms] May 24 11:49:25.537: INFO: Created: latency-svc-mn68p May 24 11:49:25.541: INFO: Got endpoints: latency-svc-mn68p [839.285917ms] May 24 11:49:25.562: INFO: Created: latency-svc-dbqc7 May 24 11:49:25.603: INFO: Got endpoints: latency-svc-dbqc7 [857.933644ms] May 24 11:49:25.682: INFO: Created: latency-svc-f6lwx May 24 11:49:25.725: INFO: Got endpoints: 
latency-svc-f6lwx [943.542297ms] May 24 11:49:25.838: INFO: Created: latency-svc-rt9hh May 24 11:49:25.840: INFO: Got endpoints: latency-svc-rt9hh [1.003904983s] May 24 11:49:25.912: INFO: Created: latency-svc-gjmhx May 24 11:49:25.998: INFO: Got endpoints: latency-svc-gjmhx [1.089879134s] May 24 11:49:26.018: INFO: Created: latency-svc-76msh May 24 11:49:26.037: INFO: Got endpoints: latency-svc-76msh [1.03131514s] May 24 11:49:26.059: INFO: Created: latency-svc-8qlqd May 24 11:49:26.068: INFO: Got endpoints: latency-svc-8qlqd [1.002711551s] May 24 11:49:26.167: INFO: Created: latency-svc-pxjhh May 24 11:49:26.193: INFO: Created: latency-svc-tq7vc May 24 11:49:26.224: INFO: Got endpoints: latency-svc-tq7vc [1.056980297s] May 24 11:49:26.225: INFO: Got endpoints: latency-svc-pxjhh [1.093509328s] May 24 11:49:26.324: INFO: Created: latency-svc-mvw9k May 24 11:49:26.353: INFO: Got endpoints: latency-svc-mvw9k [1.142679838s] May 24 11:49:26.386: INFO: Created: latency-svc-wm85q May 24 11:49:26.398: INFO: Got endpoints: latency-svc-wm85q [1.116589705s] May 24 11:49:26.458: INFO: Created: latency-svc-2vv9t May 24 11:49:26.471: INFO: Got endpoints: latency-svc-2vv9t [1.140031175s] May 24 11:49:26.494: INFO: Created: latency-svc-nwlkc May 24 11:49:26.507: INFO: Got endpoints: latency-svc-nwlkc [1.106881594s] May 24 11:49:26.527: INFO: Created: latency-svc-vx9qm May 24 11:49:26.573: INFO: Got endpoints: latency-svc-vx9qm [1.134333049s] May 24 11:49:26.575: INFO: Created: latency-svc-m76hj May 24 11:49:26.599: INFO: Got endpoints: latency-svc-m76hj [1.111942446s] May 24 11:49:26.626: INFO: Created: latency-svc-csztg May 24 11:49:26.640: INFO: Got endpoints: latency-svc-csztg [1.098007432s] May 24 11:49:26.661: INFO: Created: latency-svc-6cfqx May 24 11:49:26.723: INFO: Got endpoints: latency-svc-6cfqx [1.119753015s] May 24 11:49:26.726: INFO: Created: latency-svc-fwktg May 24 11:49:26.730: INFO: Got endpoints: latency-svc-fwktg [1.004255287s] May 24 11:49:26.750: INFO: Created: latency-svc-476sj May 24 11:49:26.766: INFO: Got endpoints: latency-svc-476sj [926.16035ms] May 24 11:49:26.791: INFO: Created: latency-svc-tqmj7 May 24 11:49:26.802: INFO: Got endpoints: latency-svc-tqmj7 [803.910765ms] May 24 11:49:26.873: INFO: Created: latency-svc-9r6xd May 24 11:49:26.883: INFO: Got endpoints: latency-svc-9r6xd [845.436667ms] May 24 11:49:26.925: INFO: Created: latency-svc-rmjlm May 24 11:49:26.953: INFO: Got endpoints: latency-svc-rmjlm [885.668077ms] May 24 11:49:27.019: INFO: Created: latency-svc-t5wdl May 24 11:49:27.044: INFO: Got endpoints: latency-svc-t5wdl [819.106468ms] May 24 11:49:27.082: INFO: Created: latency-svc-vfdbv May 24 11:49:27.098: INFO: Got endpoints: latency-svc-vfdbv [873.175813ms] May 24 11:49:27.142: INFO: Created: latency-svc-shxwr May 24 11:49:27.146: INFO: Got endpoints: latency-svc-shxwr [792.536462ms] May 24 11:49:27.169: INFO: Created: latency-svc-nh66m May 24 11:49:27.182: INFO: Got endpoints: latency-svc-nh66m [783.810528ms] May 24 11:49:27.217: INFO: Created: latency-svc-7sgc9 May 24 11:49:27.230: INFO: Got endpoints: latency-svc-7sgc9 [759.756078ms] May 24 11:49:27.280: INFO: Created: latency-svc-r84sj May 24 11:49:27.291: INFO: Got endpoints: latency-svc-r84sj [784.459027ms] May 24 11:49:27.322: INFO: Created: latency-svc-jw9pc May 24 11:49:27.339: INFO: Got endpoints: latency-svc-jw9pc [765.549827ms] May 24 11:49:27.358: INFO: Created: latency-svc-v6nrk May 24 11:49:27.375: INFO: Got endpoints: latency-svc-v6nrk [775.794641ms] May 24 11:49:27.430: INFO: Created: 
latency-svc-j56z6 May 24 11:49:27.436: INFO: Got endpoints: latency-svc-j56z6 [796.155531ms] May 24 11:49:27.470: INFO: Created: latency-svc-mcbwf May 24 11:49:27.484: INFO: Got endpoints: latency-svc-mcbwf [760.289166ms] May 24 11:49:27.513: INFO: Created: latency-svc-x7p42 May 24 11:49:27.573: INFO: Got endpoints: latency-svc-x7p42 [843.454854ms] May 24 11:49:27.604: INFO: Created: latency-svc-sh5bw May 24 11:49:27.616: INFO: Got endpoints: latency-svc-sh5bw [849.657688ms] May 24 11:49:27.668: INFO: Created: latency-svc-s8l47 May 24 11:49:27.723: INFO: Got endpoints: latency-svc-s8l47 [920.358656ms] May 24 11:49:27.723: INFO: Latencies: [88.052133ms 136.580563ms 148.155509ms 208.524548ms 239.30716ms 275.807704ms 342.780796ms 366.531247ms 402.952239ms 438.736505ms 484.389836ms 522.708119ms 680.201835ms 684.736405ms 696.646653ms 726.308231ms 730.05893ms 730.331329ms 735.488711ms 735.779816ms 735.95916ms 736.2699ms 736.68449ms 741.333912ms 741.719714ms 746.720538ms 751.114373ms 759.756078ms 760.289166ms 763.928186ms 765.125895ms 765.549827ms 766.35492ms 767.429719ms 773.334959ms 773.733404ms 775.62928ms 775.794641ms 776.787351ms 778.280283ms 779.478491ms 781.169372ms 783.810528ms 784.336102ms 784.459027ms 785.432201ms 785.91954ms 785.985651ms 790.358905ms 790.798118ms 791.070049ms 792.536462ms 793.435217ms 795.806786ms 796.155531ms 798.316792ms 799.186393ms 801.095769ms 801.889649ms 803.910765ms 805.041778ms 807.035856ms 808.10963ms 808.118981ms 809.628824ms 810.335708ms 810.342333ms 814.103326ms 815.940236ms 817.805652ms 819.106468ms 819.222605ms 822.56826ms 823.590277ms 824.27014ms 824.768735ms 826.937083ms 831.706747ms 834.511428ms 834.712543ms 838.19282ms 838.31377ms 839.285917ms 841.22683ms 843.454854ms 844.046167ms 844.141479ms 845.436667ms 848.124629ms 848.661052ms 849.657688ms 849.937646ms 857.827539ms 857.933644ms 858.362251ms 858.456124ms 863.194485ms 864.288787ms 864.487974ms 865.40527ms 865.826629ms 865.865251ms 866.483471ms 867.583691ms 868.178301ms 871.221845ms 873.175813ms 873.779757ms 874.447238ms 878.321974ms 878.749058ms 879.727449ms 879.942551ms 884.971837ms 885.483393ms 885.576472ms 885.668077ms 885.740497ms 888.007455ms 889.739202ms 890.967237ms 891.016089ms 893.037611ms 893.292103ms 894.691053ms 895.468325ms 897.596436ms 898.011268ms 898.151992ms 903.388419ms 903.807941ms 904.00126ms 907.301343ms 908.110291ms 909.475306ms 909.84285ms 909.843265ms 910.173949ms 910.192453ms 914.225995ms 915.575111ms 915.596308ms 920.358656ms 921.189425ms 922.256377ms 924.278465ms 926.16035ms 926.336826ms 927.627776ms 929.076589ms 932.175054ms 933.442682ms 933.633503ms 933.794601ms 933.874609ms 934.005892ms 934.990548ms 935.862688ms 936.35613ms 937.27001ms 939.066728ms 939.830228ms 940.173834ms 941.347557ms 942.227342ms 943.542297ms 943.598476ms 945.181974ms 945.290768ms 945.303562ms 945.366935ms 952.025674ms 954.413046ms 957.728442ms 962.817276ms 963.495929ms 981.509071ms 983.516129ms 990.900577ms 993.204036ms 999.616762ms 1.002711551s 1.003904983s 1.004255287s 1.008538526s 1.02277158s 1.025839669s 1.03131514s 1.056980297s 1.058920886s 1.089879134s 1.093509328s 1.098007432s 1.106881594s 1.111942446s 1.116589705s 1.119753015s 1.134333049s 1.140031175s 1.142679838s] May 24 11:49:27.723: INFO: 50 %ile: 865.826629ms May 24 11:49:27.723: INFO: 90 %ile: 999.616762ms May 24 11:49:27.723: INFO: 99 %ile: 1.140031175s May 24 11:49:27.723: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:49:27.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-mwdfs" for this suite. May 24 11:49:51.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:49:51.803: INFO: namespace: e2e-tests-svc-latency-mwdfs, resource: bindings, ignored listing per whitelist May 24 11:49:51.817: INFO: namespace e2e-tests-svc-latency-mwdfs deletion completed in 24.090146605s • [SLOW TEST:39.098 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:49:51.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-ad84ec26-9db4-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 11:49:51.959: INFO: Waiting up to 5m0s for pod "pod-configmaps-ad872fd8-9db4-11ea-9618-0242ac110016" in namespace "e2e-tests-configmap-2wcs5" to be "success or failure" May 24 11:49:51.992: INFO: Pod "pod-configmaps-ad872fd8-9db4-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 33.053124ms May 24 11:49:53.996: INFO: Pod "pod-configmaps-ad872fd8-9db4-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03661889s May 24 11:49:56.000: INFO: Pod "pod-configmaps-ad872fd8-9db4-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040662065s STEP: Saw pod success May 24 11:49:56.000: INFO: Pod "pod-configmaps-ad872fd8-9db4-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:49:56.004: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-ad872fd8-9db4-11ea-9618-0242ac110016 container configmap-volume-test: STEP: delete the pod May 24 11:49:56.288: INFO: Waiting for pod pod-configmaps-ad872fd8-9db4-11ea-9618-0242ac110016 to disappear May 24 11:49:56.344: INFO: Pod pod-configmaps-ad872fd8-9db4-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:49:56.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2wcs5" for this suite. 
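Editor's note: the ConfigMap test above creates a ConfigMap, mounts it into a pod that runs as a non-root user, and waits for the pod to reach "Succeeded" before reading its logs. A minimal sketch of an equivalent manifest, assuming illustrative names rather than the generated ones from this run:

    kubectl create configmap demo-config --from-literal=data-1=value-1

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-non-root-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000        # run the container as a non-root UID
      containers:
      - name: reader
        image: busybox
        command: ["cat", "/etc/config/data-1"]
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        configMap:
          name: demo-config
    EOF

    kubectl logs configmap-non-root-demo   # mirrors the "Trying to get logs" step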
May 24 11:50:02.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:50:02.531: INFO: namespace: e2e-tests-configmap-2wcs5, resource: bindings, ignored listing per whitelist May 24 11:50:02.548: INFO: namespace e2e-tests-configmap-2wcs5 deletion completed in 6.200510881s • [SLOW TEST:10.730 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:50:02.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0524 11:50:33.251331 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 24 11:50:33.251: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:50:33.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-lkw89" for this suite. 
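Editor's note: the garbage-collector test above deletes a Deployment with deleteOptions.PropagationPolicy=Orphan and then verifies that the ReplicaSet is left behind. A rough command-line equivalent, assuming a Deployment named web (recent kubectl accepts --cascade=orphan; clients contemporary with this v1.13 run used --cascade=false for the same behaviour):

    kubectl create deployment web --image=nginx
    kubectl rollout status deployment/web
    kubectl delete deployment web --cascade=orphan   # older kubectl: --cascade=false
    # the ReplicaSet and its pods survive the deletion, now without an owner reference
    kubectl get rs,pods -l app=web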
May 24 11:50:39.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:50:39.427: INFO: namespace: e2e-tests-gc-lkw89, resource: bindings, ignored listing per whitelist May 24 11:50:39.467: INFO: namespace e2e-tests-gc-lkw89 deletion completed in 6.212841603s • [SLOW TEST:36.920 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:50:39.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:51:10.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-rkvwl" for this suite. 
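Editor's note: the container-runtime test above starts containers that exit and checks the reported RestartCount, Phase, Ready condition and State for each restart policy. The same status fields can be read back by hand; a sketch for one of the three policies (Never), with an illustrative pod name:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: terminate-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "exit 0"]
    EOF

    # phase, restart count and state of the single container
    kubectl get pod terminate-demo -o jsonpath='{.status.phase}{"\n"}'
    kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
    kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].state}{"\n"}'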
May 24 11:51:16.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:51:16.856: INFO: namespace: e2e-tests-container-runtime-rkvwl, resource: bindings, ignored listing per whitelist May 24 11:51:16.861: INFO: namespace e2e-tests-container-runtime-rkvwl deletion completed in 6.135778624s • [SLOW TEST:37.394 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:51:16.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-e035b684-9db4-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 11:51:16.972: INFO: Waiting up to 5m0s for pod "pod-configmaps-e037c14a-9db4-11ea-9618-0242ac110016" in namespace "e2e-tests-configmap-gjdrs" to be "success or failure" May 24 11:51:16.977: INFO: Pod "pod-configmaps-e037c14a-9db4-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468722ms May 24 11:51:18.981: INFO: Pod "pod-configmaps-e037c14a-9db4-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008971748s May 24 11:51:20.986: INFO: Pod "pod-configmaps-e037c14a-9db4-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013650992s STEP: Saw pod success May 24 11:51:20.986: INFO: Pod "pod-configmaps-e037c14a-9db4-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:51:20.988: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-e037c14a-9db4-11ea-9618-0242ac110016 container configmap-volume-test: STEP: delete the pod May 24 11:51:21.027: INFO: Waiting for pod pod-configmaps-e037c14a-9db4-11ea-9618-0242ac110016 to disappear May 24 11:51:21.036: INFO: Pod pod-configmaps-e037c14a-9db4-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:51:21.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gjdrs" for this suite. 
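Editor's note: the defaultMode variant above differs from the earlier ConfigMap test only in setting the file mode applied to the mounted keys. A sketch with illustrative names (the mode value 0400 is an arbitrary example):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox
        command: ["sh", "-c", "ls -l /etc/config"]
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        configMap:
          name: demo-config
          defaultMode: 0400    # files under /etc/config get mode 0400
    EOF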
May 24 11:51:27.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:51:27.217: INFO: namespace: e2e-tests-configmap-gjdrs, resource: bindings, ignored listing per whitelist May 24 11:51:27.292: INFO: namespace e2e-tests-configmap-gjdrs deletion completed in 6.251984271s • [SLOW TEST:10.430 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:51:27.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 24 11:51:27.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2tqmm' May 24 11:51:30.211: INFO: stderr: "" May 24 11:51:30.211: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 24 11:51:31.228: INFO: Selector matched 1 pods for map[app:redis] May 24 11:51:31.228: INFO: Found 0 / 1 May 24 11:51:32.216: INFO: Selector matched 1 pods for map[app:redis] May 24 11:51:32.216: INFO: Found 0 / 1 May 24 11:51:33.216: INFO: Selector matched 1 pods for map[app:redis] May 24 11:51:33.216: INFO: Found 1 / 1 May 24 11:51:33.216: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 24 11:51:33.219: INFO: Selector matched 1 pods for map[app:redis] May 24 11:51:33.219: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 24 11:51:33.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-qwn7f --namespace=e2e-tests-kubectl-2tqmm -p {"metadata":{"annotations":{"x":"y"}}}' May 24 11:51:33.334: INFO: stderr: "" May 24 11:51:33.334: INFO: stdout: "pod/redis-master-qwn7f patched\n" STEP: checking annotations May 24 11:51:33.336: INFO: Selector matched 1 pods for map[app:redis] May 24 11:51:33.337: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:51:33.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2tqmm" for this suite. 
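Editor's note: the kubectl patch test above adds an annotation to each pod of the replication controller with a strategic-merge patch. The invocation from the log, reduced to its essentials (pod name taken from this run; namespace and kubeconfig flags omitted):

    kubectl patch pod redis-master-qwn7f -p '{"metadata":{"annotations":{"x":"y"}}}'
    kubectl get pod redis-master-qwn7f -o jsonpath='{.metadata.annotations.x}'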
May 24 11:51:55.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:51:55.428: INFO: namespace: e2e-tests-kubectl-2tqmm, resource: bindings, ignored listing per whitelist May 24 11:51:55.451: INFO: namespace e2e-tests-kubectl-2tqmm deletion completed in 22.111225378s • [SLOW TEST:28.159 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:51:55.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-ng5j9.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-ng5j9.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-ng5j9.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-ng5j9.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-ng5j9.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-ng5j9.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 11:52:01.639: INFO: DNS probes using e2e-tests-dns-ng5j9/dns-test-f73639a3-9db4-11ea-9618-0242ac110016 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:52:01.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-ng5j9" for this suite. 
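Editor's note: the DNS probes above resolve kubernetes.default (and the pod's own A record) with dig over both UDP and TCP, and with getent for the hosts file path. The same checks can be run by hand from any pod that has DNS tooling installed; the pod name dnsutils and its tooling are assumptions, the dig flags are the ones used in the run:

    kubectl exec dnsutils -- dig +notcp +noall +answer +search kubernetes.default A
    kubectl exec dnsutils -- dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A
    kubectl exec dnsutils -- getent hosts kubernetes.default.svc.cluster.local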
May 24 11:52:07.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:52:07.768: INFO: namespace: e2e-tests-dns-ng5j9, resource: bindings, ignored listing per whitelist May 24 11:52:07.846: INFO: namespace e2e-tests-dns-ng5j9 deletion completed in 6.11297401s • [SLOW TEST:12.394 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:52:07.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 24 11:52:11.990: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-fe9756f0-9db4-11ea-9618-0242ac110016,GenerateName:,Namespace:e2e-tests-events-tvm8q,SelfLink:/api/v1/namespaces/e2e-tests-events-tvm8q/pods/send-events-fe9756f0-9db4-11ea-9618-0242ac110016,UID:fe9bfbee-9db4-11ea-99e8-0242ac110002,ResourceVersion:12268337,Generation:0,CreationTimestamp:2020-05-24 11:52:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 922665528,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-84kvr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-84kvr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-84kvr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b914d0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001b914f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:52:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:52:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:52:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-24 11:52:07 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.93,StartTime:2020-05-24 11:52:07 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-24 11:52:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://1865e4535093d1b6b32cb984aae2c366d7f00858d63772ce7b688b865cbce537}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 24 11:52:13.995: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 24 11:52:16.000: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:52:16.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-tvm8q" for this suite. May 24 11:52:54.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:52:54.081: INFO: namespace: e2e-tests-events-tvm8q, resource: bindings, ignored listing per whitelist May 24 11:52:54.142: INFO: namespace e2e-tests-events-tvm8q deletion completed in 38.098487122s • [SLOW TEST:46.296 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:52:54.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-1a3b9adf-9db5-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 11:52:54.323: INFO: Waiting up to 5m0s for pod "pod-secrets-1a3c1541-9db5-11ea-9618-0242ac110016" in namespace "e2e-tests-secrets-5n9vn" to be "success or failure" May 24 11:52:54.338: INFO: Pod "pod-secrets-1a3c1541-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", 
readiness=false. Elapsed: 15.113604ms May 24 11:52:56.402: INFO: Pod "pod-secrets-1a3c1541-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079232246s May 24 11:52:58.406: INFO: Pod "pod-secrets-1a3c1541-9db5-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083130905s STEP: Saw pod success May 24 11:52:58.406: INFO: Pod "pod-secrets-1a3c1541-9db5-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:52:58.408: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-1a3c1541-9db5-11ea-9618-0242ac110016 container secret-volume-test: STEP: delete the pod May 24 11:52:58.461: INFO: Waiting for pod pod-secrets-1a3c1541-9db5-11ea-9618-0242ac110016 to disappear May 24 11:52:58.475: INFO: Pod pod-secrets-1a3c1541-9db5-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:52:58.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-5n9vn" for this suite. May 24 11:53:04.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:53:04.550: INFO: namespace: e2e-tests-secrets-5n9vn, resource: bindings, ignored listing per whitelist May 24 11:53:04.594: INFO: namespace e2e-tests-secrets-5n9vn deletion completed in 6.114062118s • [SLOW TEST:10.452 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:53:04.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-206d8dc0-9db5-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 11:53:04.716: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-20704b22-9db5-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-wzv7h" to be "success or failure" May 24 11:53:04.751: INFO: Pod "pod-projected-secrets-20704b22-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 34.413528ms May 24 11:53:06.755: INFO: Pod "pod-projected-secrets-20704b22-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03882671s May 24 11:53:08.760: INFO: Pod "pod-projected-secrets-20704b22-9db5-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.043537134s STEP: Saw pod success May 24 11:53:08.760: INFO: Pod "pod-projected-secrets-20704b22-9db5-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:53:08.763: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-20704b22-9db5-11ea-9618-0242ac110016 container projected-secret-volume-test: STEP: delete the pod May 24 11:53:08.796: INFO: Waiting for pod pod-projected-secrets-20704b22-9db5-11ea-9618-0242ac110016 to disappear May 24 11:53:08.812: INFO: Pod pod-projected-secrets-20704b22-9db5-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:53:08.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wzv7h" for this suite. May 24 11:53:14.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:53:14.880: INFO: namespace: e2e-tests-projected-wzv7h, resource: bindings, ignored listing per whitelist May 24 11:53:14.890: INFO: namespace e2e-tests-projected-wzv7h deletion completed in 6.073893622s • [SLOW TEST:10.296 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:53:14.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 11:53:14.986: INFO: Waiting up to 5m0s for pod "downwardapi-volume-268a5795-9db5-11ea-9618-0242ac110016" in namespace "e2e-tests-downward-api-jxbzt" to be "success or failure" May 24 11:53:14.991: INFO: Pod "downwardapi-volume-268a5795-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 5.480645ms May 24 11:53:16.995: INFO: Pod "downwardapi-volume-268a5795-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00922012s May 24 11:53:19.000: INFO: Pod "downwardapi-volume-268a5795-9db5-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013624312s STEP: Saw pod success May 24 11:53:19.000: INFO: Pod "downwardapi-volume-268a5795-9db5-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:53:19.003: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-268a5795-9db5-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 11:53:19.047: INFO: Waiting for pod downwardapi-volume-268a5795-9db5-11ea-9618-0242ac110016 to disappear May 24 11:53:19.058: INFO: Pod downwardapi-volume-268a5795-9db5-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:53:19.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jxbzt" for this suite. May 24 11:53:25.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:53:25.115: INFO: namespace: e2e-tests-downward-api-jxbzt, resource: bindings, ignored listing per whitelist May 24 11:53:25.183: INFO: namespace e2e-tests-downward-api-jxbzt deletion completed in 6.122679184s • [SLOW TEST:10.293 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:53:25.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 11:53:25.267: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2caeb8ad-9db5-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-rvk5v" to be "success or failure" May 24 11:53:25.314: INFO: Pod "downwardapi-volume-2caeb8ad-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 46.902358ms May 24 11:53:27.368: INFO: Pod "downwardapi-volume-2caeb8ad-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100671259s May 24 11:53:29.373: INFO: Pod "downwardapi-volume-2caeb8ad-9db5-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.105200506s STEP: Saw pod success May 24 11:53:29.373: INFO: Pod "downwardapi-volume-2caeb8ad-9db5-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:53:29.376: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-2caeb8ad-9db5-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 11:53:29.417: INFO: Waiting for pod downwardapi-volume-2caeb8ad-9db5-11ea-9618-0242ac110016 to disappear May 24 11:53:29.421: INFO: Pod downwardapi-volume-2caeb8ad-9db5-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:53:29.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rvk5v" for this suite. May 24 11:53:35.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:53:35.491: INFO: namespace: e2e-tests-projected-rvk5v, resource: bindings, ignored listing per whitelist May 24 11:53:35.522: INFO: namespace e2e-tests-projected-rvk5v deletion completed in 6.098647188s • [SLOW TEST:10.339 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:53:35.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:53:39.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-hsbpw" for this suite. 
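Editor's note: the Kubelet hostAliases test above verifies that entries from pod.spec.hostAliases are written into the container's /etc/hosts. A minimal sketch, with illustrative names and addresses:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-demo
    spec:
      restartPolicy: Never
      hostAliases:
      - ip: "127.0.0.1"
        hostnames: ["foo.local", "bar.local"]
      containers:
      - name: main
        image: busybox
        command: ["cat", "/etc/hosts"]
    EOF

    kubectl logs hostaliases-demo | grep foo.local   # the alias should appear in /etc/hosts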
May 24 11:54:17.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:54:17.715: INFO: namespace: e2e-tests-kubelet-test-hsbpw, resource: bindings, ignored listing per whitelist May 24 11:54:17.773: INFO: namespace e2e-tests-kubelet-test-hsbpw deletion completed in 38.085236334s • [SLOW TEST:42.250 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:54:17.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc May 24 11:54:17.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lq7h9' May 24 11:54:18.194: INFO: stderr: "" May 24 11:54:18.194: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. May 24 11:54:19.198: INFO: Selector matched 1 pods for map[app:redis] May 24 11:54:19.199: INFO: Found 0 / 1 May 24 11:54:20.261: INFO: Selector matched 1 pods for map[app:redis] May 24 11:54:20.261: INFO: Found 0 / 1 May 24 11:54:21.198: INFO: Selector matched 1 pods for map[app:redis] May 24 11:54:21.198: INFO: Found 0 / 1 May 24 11:54:22.199: INFO: Selector matched 1 pods for map[app:redis] May 24 11:54:22.199: INFO: Found 1 / 1 May 24 11:54:22.199: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 24 11:54:22.202: INFO: Selector matched 1 pods for map[app:redis] May 24 11:54:22.202: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 24 11:54:22.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2j9xz redis-master --namespace=e2e-tests-kubectl-lq7h9' May 24 11:54:22.313: INFO: stderr: "" May 24 11:54:22.313: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 24 May 11:54:20.979 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 May 11:54:20.979 # Server started, Redis version 3.2.12\n1:M 24 May 11:54:20.980 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 May 11:54:20.980 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 24 11:54:22.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2j9xz redis-master --namespace=e2e-tests-kubectl-lq7h9 --tail=1' May 24 11:54:22.438: INFO: stderr: "" May 24 11:54:22.438: INFO: stdout: "1:M 24 May 11:54:20.980 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 24 11:54:22.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2j9xz redis-master --namespace=e2e-tests-kubectl-lq7h9 --limit-bytes=1' May 24 11:54:22.573: INFO: stderr: "" May 24 11:54:22.573: INFO: stdout: " " STEP: exposing timestamps May 24 11:54:22.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2j9xz redis-master --namespace=e2e-tests-kubectl-lq7h9 --tail=1 --timestamps' May 24 11:54:22.681: INFO: stderr: "" May 24 11:54:22.681: INFO: stdout: "2020-05-24T11:54:20.980216044Z 1:M 24 May 11:54:20.980 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 24 11:54:25.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2j9xz redis-master --namespace=e2e-tests-kubectl-lq7h9 --since=1s' May 24 11:54:25.290: INFO: stderr: "" May 24 11:54:25.291: INFO: stdout: "" May 24 11:54:25.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2j9xz redis-master --namespace=e2e-tests-kubectl-lq7h9 --since=24h' May 24 11:54:25.400: INFO: stderr: "" May 24 11:54:25.400: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 24 May 11:54:20.979 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 May 11:54:20.979 # Server started, Redis version 3.2.12\n1:M 24 May 11:54:20.980 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 May 11:54:20.980 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources May 24 11:54:25.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lq7h9' May 24 11:54:25.506: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 11:54:25.506: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 24 11:54:25.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-lq7h9' May 24 11:54:25.615: INFO: stderr: "No resources found.\n" May 24 11:54:25.615: INFO: stdout: "" May 24 11:54:25.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-lq7h9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 24 11:54:25.702: INFO: stderr: "" May 24 11:54:25.702: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:54:25.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lq7h9" for this suite. 
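Editor's note: the log-filtering test above exercises the selection flags of kubectl logs (the run uses the older "kubectl log" alias; "kubectl logs" is the current form). The same flags against the pod from this run:

    kubectl logs redis-master-2j9xz                      # full container log
    kubectl logs redis-master-2j9xz --tail=1             # last line only
    kubectl logs redis-master-2j9xz --limit-bytes=1      # first byte only
    kubectl logs redis-master-2j9xz --tail=1 --timestamps
    kubectl logs redis-master-2j9xz --since=1s           # empty if nothing was logged in the last second
    kubectl logs redis-master-2j9xz --since=24h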
May 24 11:54:47.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:54:47.810: INFO: namespace: e2e-tests-kubectl-lq7h9, resource: bindings, ignored listing per whitelist May 24 11:54:47.815: INFO: namespace e2e-tests-kubectl-lq7h9 deletion completed in 22.109246504s • [SLOW TEST:30.042 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:54:47.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-5df6297b-9db5-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 11:54:47.937: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5df6b7d8-9db5-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-z2zvm" to be "success or failure" May 24 11:54:47.940: INFO: Pod "pod-projected-secrets-5df6b7d8-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.183322ms May 24 11:54:49.944: INFO: Pod "pod-projected-secrets-5df6b7d8-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007351292s May 24 11:54:51.948: INFO: Pod "pod-projected-secrets-5df6b7d8-9db5-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011662323s STEP: Saw pod success May 24 11:54:51.948: INFO: Pod "pod-projected-secrets-5df6b7d8-9db5-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:54:51.952: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-5df6b7d8-9db5-11ea-9618-0242ac110016 container projected-secret-volume-test: STEP: delete the pod May 24 11:54:51.971: INFO: Waiting for pod pod-projected-secrets-5df6b7d8-9db5-11ea-9618-0242ac110016 to disappear May 24 11:54:51.992: INFO: Pod pod-projected-secrets-5df6b7d8-9db5-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:54:51.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z2zvm" for this suite. 
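Editor's note: the projected-secret test above mounts a Secret through a projected volume into a pod that runs as non-root with an fsGroup, so the mounted files stay readable despite the restricted mode. A sketch with illustrative names and values:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000
        fsGroup: 1000          # mounted files are group-owned by this GID
      containers:
      - name: reader
        image: busybox
        command: ["cat", "/etc/projected/data-1"]
        volumeMounts:
        - name: projected
          mountPath: /etc/projected
      volumes:
      - name: projected
        projected:
          defaultMode: 0440    # owner+group read only
          sources:
          - secret:
              name: demo-secret
    EOF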
May 24 11:54:58.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:54:58.034: INFO: namespace: e2e-tests-projected-z2zvm, resource: bindings, ignored listing per whitelist May 24 11:54:58.092: INFO: namespace e2e-tests-projected-z2zvm deletion completed in 6.097241303s • [SLOW TEST:10.276 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:54:58.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 11:55:22.236: INFO: Container started at 2020-05-24 11:55:00 +0000 UTC, pod became ready at 2020-05-24 11:55:20 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:55:22.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-srrjc" for this suite. 
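Editor's note: the readiness-probe test above expects the pod not to report Ready before its initial delay has elapsed, and never to restart. A sketch of such a probe (values illustrative, not the ones used by the e2e image):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-delay-demo
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "touch /tmp/ready && sleep 600"]
        readinessProbe:
          exec:
            command: ["cat", "/tmp/ready"]
          initialDelaySeconds: 20   # the pod stays NotReady for at least 20s
          periodSeconds: 5
    EOF

    kubectl get pod readiness-delay-demo -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'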
May 24 11:55:34.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:55:34.308: INFO: namespace: e2e-tests-container-probe-srrjc, resource: bindings, ignored listing per whitelist May 24 11:55:34.333: INFO: namespace e2e-tests-container-probe-srrjc deletion completed in 12.093134953s • [SLOW TEST:36.242 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:55:34.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions May 24 11:55:34.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 24 11:55:34.629: INFO: stderr: "" May 24 11:55:34.629: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:55:34.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-g4m5s" for this suite. 
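Editor's note: the api-versions check above only verifies that the core group "v1" appears in the server's discovery output. The equivalent one-liner:

    kubectl api-versions | grep -x v1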
May 24 11:55:40.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:55:40.732: INFO: namespace: e2e-tests-kubectl-g4m5s, resource: bindings, ignored listing per whitelist May 24 11:55:40.734: INFO: namespace e2e-tests-kubectl-g4m5s deletion completed in 6.10035189s • [SLOW TEST:6.400 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:55:40.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 24 11:55:40.863: INFO: Waiting up to 5m0s for pod "pod-7d829587-9db5-11ea-9618-0242ac110016" in namespace "e2e-tests-emptydir-9j5pm" to be "success or failure" May 24 11:55:40.870: INFO: Pod "pod-7d829587-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 6.708842ms May 24 11:55:42.891: INFO: Pod "pod-7d829587-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027518974s May 24 11:55:44.894: INFO: Pod "pod-7d829587-9db5-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030479412s STEP: Saw pod success May 24 11:55:44.894: INFO: Pod "pod-7d829587-9db5-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:55:44.895: INFO: Trying to get logs from node hunter-worker2 pod pod-7d829587-9db5-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 11:55:44.939: INFO: Waiting for pod pod-7d829587-9db5-11ea-9618-0242ac110016 to disappear May 24 11:55:44.974: INFO: Pod pod-7d829587-9db5-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:55:44.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9j5pm" for this suite. 
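A minimal sketch of the same emptyDir scenario, assuming an illustrative busybox image and hypothetical names (the real test uses its own mounttest image): a tmpfs-backed emptyDir mounted into a non-root container should come up mode 0777 so the non-root user can use it.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo       # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # non-root, matching the test's intent
  containers:
  - name: test-container
    image: busybox:1.29           # illustrative image
    command: ["sh", "-c", "stat -c '%a' /mnt/test && mount | grep /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory              # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-tmpfs-demo  # expect "777" and a tmpfs mount entry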
May 24 11:55:50.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:55:51.074: INFO: namespace: e2e-tests-emptydir-9j5pm, resource: bindings, ignored listing per whitelist May 24 11:55:51.077: INFO: namespace e2e-tests-emptydir-9j5pm deletion completed in 6.099329261s • [SLOW TEST:10.343 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:55:51.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:55:58.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-ng4zn" for this suite. 
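The adoption flow above can be reproduced by hand roughly as follows (names and image are illustrative): create a bare pod carrying a 'name' label first, then a ReplicationController whose selector matches it; the controller adopts the orphan instead of creating a second pod.

kubectl run pod-adoption --image=nginx:1.14-alpine --restart=Never --labels=name=pod-adoption
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
EOF
# The pre-existing pod should now carry an ownerReference pointing at the RC:
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # -> ReplicationController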
May 24 11:56:20.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:56:20.258: INFO: namespace: e2e-tests-replication-controller-ng4zn, resource: bindings, ignored listing per whitelist May 24 11:56:20.332: INFO: namespace e2e-tests-replication-controller-ng4zn deletion completed in 22.120108035s • [SLOW TEST:29.254 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:56:20.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition May 24 11:56:20.430: INFO: Waiting up to 5m0s for pod "var-expansion-9516eb6f-9db5-11ea-9618-0242ac110016" in namespace "e2e-tests-var-expansion-xz866" to be "success or failure" May 24 11:56:20.434: INFO: Pod "var-expansion-9516eb6f-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.98696ms May 24 11:56:22.437: INFO: Pod "var-expansion-9516eb6f-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007808771s May 24 11:56:24.442: INFO: Pod "var-expansion-9516eb6f-9db5-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012215793s STEP: Saw pod success May 24 11:56:24.442: INFO: Pod "var-expansion-9516eb6f-9db5-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:56:24.445: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-9516eb6f-9db5-11ea-9618-0242ac110016 container dapi-container: STEP: delete the pod May 24 11:56:24.479: INFO: Waiting for pod var-expansion-9516eb6f-9db5-11ea-9618-0242ac110016 to disappear May 24 11:56:24.502: INFO: Pod var-expansion-9516eb6f-9db5-11ea-9618-0242ac110016 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:56:24.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-xz866" for this suite. 
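What the var-expansion test checks is ordinary dependent environment variables; a minimal sketch with hypothetical names and an illustrative image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-composition-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29           # illustrative image
    command: ["sh", "-c", "env | grep FOOBAR"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"     # composed from the two variables defined above
EOF
kubectl logs env-composition-demo # expect FOOBAR=foo-value;;bar-value

Note that $(FOO) and $(BAR) only expand because those variables are defined earlier in the same env list; a forward reference would be left as the literal string.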
May 24 11:56:30.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:56:30.550: INFO: namespace: e2e-tests-var-expansion-xz866, resource: bindings, ignored listing per whitelist May 24 11:56:30.608: INFO: namespace e2e-tests-var-expansion-xz866 deletion completed in 6.102515463s • [SLOW TEST:10.276 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:56:30.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-9b3fb7cb-9db5-11ea-9618-0242ac110016 May 24 11:56:30.788: INFO: Pod name my-hostname-basic-9b3fb7cb-9db5-11ea-9618-0242ac110016: Found 0 pods out of 1 May 24 11:56:35.793: INFO: Pod name my-hostname-basic-9b3fb7cb-9db5-11ea-9618-0242ac110016: Found 1 pods out of 1 May 24 11:56:35.793: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-9b3fb7cb-9db5-11ea-9618-0242ac110016" are running May 24 11:56:35.796: INFO: Pod "my-hostname-basic-9b3fb7cb-9db5-11ea-9618-0242ac110016-c8vvq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 11:56:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 11:56:33 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 11:56:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 11:56:30 +0000 UTC Reason: Message:}]) May 24 11:56:35.796: INFO: Trying to dial the pod May 24 11:56:40.814: INFO: Controller my-hostname-basic-9b3fb7cb-9db5-11ea-9618-0242ac110016: Got expected result from replica 1 [my-hostname-basic-9b3fb7cb-9db5-11ea-9618-0242ac110016-c8vvq]: "my-hostname-basic-9b3fb7cb-9db5-11ea-9618-0242ac110016-c8vvq", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:56:40.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-j58jg" for this suite. 
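The "serve a basic image on each replica" check amounts to running a public hostname-echoing image under an RC and asking each replica for its own name. A rough equivalent with hypothetical names; the image tag is an assumption, stand in any image that answers HTTP with its hostname:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic         # hypothetical name
spec:
  replicas: 2
  selector:
    app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed tag
        ports:
        - containerPort: 9376
EOF
# Each replica answers HTTP on 9376 with its own pod name, e.g.
#   wget -qO- http://<pod-ip>:9376   ->   my-hostname-basic-xxxxx
kubectl get pods -l app=my-hostname-basic -o wide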
May 24 11:56:46.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:56:46.860: INFO: namespace: e2e-tests-replication-controller-j58jg, resource: bindings, ignored listing per whitelist May 24 11:56:46.915: INFO: namespace e2e-tests-replication-controller-j58jg deletion completed in 6.096553986s • [SLOW TEST:16.307 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:56:46.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server May 24 11:56:47.011: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:56:47.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mk2kt" for this suite. 
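The proxy test can be re-run by hand: start kubectl proxy on a random port (-p 0), read the chosen port from its startup message, and curl /api/ through it. A small sketch; the temp-file path and the sleep are arbitrary choices:

kubectl proxy -p 0 --disable-filter=true > /tmp/kubectl-proxy.out 2>&1 &
sleep 1
PORT=$(sed -n 's/.*127\.0\.0\.1:\([0-9]*\).*/\1/p' /tmp/kubectl-proxy.out)
curl -s "http://127.0.0.1:${PORT}/api/"    # should return the core API versions JSON
kill $!                                    # stop the background proxy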
May 24 11:56:53.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:56:53.147: INFO: namespace: e2e-tests-kubectl-mk2kt, resource: bindings, ignored listing per whitelist May 24 11:56:53.194: INFO: namespace e2e-tests-kubectl-mk2kt deletion completed in 6.090982026s • [SLOW TEST:6.278 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:56:53.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 11:56:53.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8acd8cd-9db5-11ea-9618-0242ac110016" in namespace "e2e-tests-downward-api-ttj9s" to be "success or failure" May 24 11:56:53.311: INFO: Pod "downwardapi-volume-a8acd8cd-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.052ms May 24 11:56:55.315: INFO: Pod "downwardapi-volume-a8acd8cd-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00677082s May 24 11:56:57.319: INFO: Pod "downwardapi-volume-a8acd8cd-9db5-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011189298s STEP: Saw pod success May 24 11:56:57.319: INFO: Pod "downwardapi-volume-a8acd8cd-9db5-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:56:57.322: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-a8acd8cd-9db5-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 11:56:57.432: INFO: Waiting for pod downwardapi-volume-a8acd8cd-9db5-11ea-9618-0242ac110016 to disappear May 24 11:56:57.448: INFO: Pod downwardapi-volume-a8acd8cd-9db5-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:56:57.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ttj9s" for this suite. 
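The downward-API "mode on item file" behaviour is plain pod config; a minimal sketch with a hypothetical pod name and an illustrative busybox image in place of the test's mounttest container:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29           # illustrative image
    command: ["sh", "-c", "stat -L -c '%a' /etc/podinfo/podname && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                # per-item file mode, which is what the test asserts on
EOF
kubectl logs downwardapi-mode-demo   # expect "400" followed by the pod's own name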
May 24 11:57:03.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:57:03.497: INFO: namespace: e2e-tests-downward-api-ttj9s, resource: bindings, ignored listing per whitelist May 24 11:57:03.579: INFO: namespace e2e-tests-downward-api-ttj9s deletion completed in 6.12699854s • [SLOW TEST:10.385 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:57:03.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-r9s2l STEP: creating a selector STEP: Creating the service pods in kubernetes May 24 11:57:03.732: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 24 11:57:29.908: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.102:8080/dial?request=hostName&protocol=http&host=10.244.1.101&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-r9s2l PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:57:29.908: INFO: >>> kubeConfig: /root/.kube/config I0524 11:57:29.950004 6 log.go:172] (0xc0008be580) (0xc0010979a0) Create stream I0524 11:57:29.950037 6 log.go:172] (0xc0008be580) (0xc0010979a0) Stream added, broadcasting: 1 I0524 11:57:29.952516 6 log.go:172] (0xc0008be580) Reply frame received for 1 I0524 11:57:29.952553 6 log.go:172] (0xc0008be580) (0xc001e3fa40) Create stream I0524 11:57:29.952562 6 log.go:172] (0xc0008be580) (0xc001e3fa40) Stream added, broadcasting: 3 I0524 11:57:29.953240 6 log.go:172] (0xc0008be580) Reply frame received for 3 I0524 11:57:29.953313 6 log.go:172] (0xc0008be580) (0xc000590a00) Create stream I0524 11:57:29.953320 6 log.go:172] (0xc0008be580) (0xc000590a00) Stream added, broadcasting: 5 I0524 11:57:29.953839 6 log.go:172] (0xc0008be580) Reply frame received for 5 I0524 11:57:30.045460 6 log.go:172] (0xc0008be580) Data frame received for 3 I0524 11:57:30.045500 6 log.go:172] (0xc001e3fa40) (3) Data frame handling I0524 11:57:30.045528 6 log.go:172] (0xc001e3fa40) (3) Data frame sent I0524 11:57:30.046391 6 log.go:172] (0xc0008be580) Data frame received for 3 I0524 11:57:30.046423 6 log.go:172] (0xc001e3fa40) (3) Data frame handling I0524 11:57:30.046585 6 log.go:172] (0xc0008be580) Data frame received for 5 I0524 11:57:30.046615 6 log.go:172] (0xc000590a00) (5) Data frame handling I0524 11:57:30.048714 6 log.go:172] (0xc0008be580) Data frame received 
for 1 I0524 11:57:30.048749 6 log.go:172] (0xc0010979a0) (1) Data frame handling I0524 11:57:30.048813 6 log.go:172] (0xc0010979a0) (1) Data frame sent I0524 11:57:30.048845 6 log.go:172] (0xc0008be580) (0xc0010979a0) Stream removed, broadcasting: 1 I0524 11:57:30.048869 6 log.go:172] (0xc0008be580) Go away received I0524 11:57:30.048944 6 log.go:172] (0xc0008be580) (0xc0010979a0) Stream removed, broadcasting: 1 I0524 11:57:30.048957 6 log.go:172] (0xc0008be580) (0xc001e3fa40) Stream removed, broadcasting: 3 I0524 11:57:30.048964 6 log.go:172] (0xc0008be580) (0xc000590a00) Stream removed, broadcasting: 5 May 24 11:57:30.048: INFO: Waiting for endpoints: map[] May 24 11:57:30.052: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.102:8080/dial?request=hostName&protocol=http&host=10.244.2.97&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-r9s2l PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:57:30.052: INFO: >>> kubeConfig: /root/.kube/config I0524 11:57:30.084453 6 log.go:172] (0xc001e50000) (0xc001e3fc20) Create stream I0524 11:57:30.084486 6 log.go:172] (0xc001e50000) (0xc001e3fc20) Stream added, broadcasting: 1 I0524 11:57:30.086573 6 log.go:172] (0xc001e50000) Reply frame received for 1 I0524 11:57:30.086618 6 log.go:172] (0xc001e50000) (0xc001e16820) Create stream I0524 11:57:30.086637 6 log.go:172] (0xc001e50000) (0xc001e16820) Stream added, broadcasting: 3 I0524 11:57:30.087738 6 log.go:172] (0xc001e50000) Reply frame received for 3 I0524 11:57:30.087778 6 log.go:172] (0xc001e50000) (0xc001e168c0) Create stream I0524 11:57:30.087787 6 log.go:172] (0xc001e50000) (0xc001e168c0) Stream added, broadcasting: 5 I0524 11:57:30.088728 6 log.go:172] (0xc001e50000) Reply frame received for 5 I0524 11:57:30.158117 6 log.go:172] (0xc001e50000) Data frame received for 3 I0524 11:57:30.158156 6 log.go:172] (0xc001e16820) (3) Data frame handling I0524 11:57:30.158180 6 log.go:172] (0xc001e16820) (3) Data frame sent I0524 11:57:30.158541 6 log.go:172] (0xc001e50000) Data frame received for 3 I0524 11:57:30.158571 6 log.go:172] (0xc001e16820) (3) Data frame handling I0524 11:57:30.158884 6 log.go:172] (0xc001e50000) Data frame received for 5 I0524 11:57:30.158932 6 log.go:172] (0xc001e168c0) (5) Data frame handling I0524 11:57:30.160757 6 log.go:172] (0xc001e50000) Data frame received for 1 I0524 11:57:30.160806 6 log.go:172] (0xc001e3fc20) (1) Data frame handling I0524 11:57:30.160872 6 log.go:172] (0xc001e3fc20) (1) Data frame sent I0524 11:57:30.160905 6 log.go:172] (0xc001e50000) (0xc001e3fc20) Stream removed, broadcasting: 1 I0524 11:57:30.160933 6 log.go:172] (0xc001e50000) Go away received I0524 11:57:30.161046 6 log.go:172] (0xc001e50000) (0xc001e3fc20) Stream removed, broadcasting: 1 I0524 11:57:30.161068 6 log.go:172] (0xc001e50000) (0xc001e16820) Stream removed, broadcasting: 3 I0524 11:57:30.161085 6 log.go:172] (0xc001e50000) (0xc001e168c0) Stream removed, broadcasting: 5 May 24 11:57:30.161: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:57:30.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-r9s2l" for this suite. 
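The curl the framework execs above can be replayed by hand while the test pods exist. The pod and container names below are the ones shown in the log; the IPs are placeholders to be read from kubectl get pods -o wide in the test namespace:

NS=e2e-tests-pod-network-test-r9s2l   # test namespace from the log (only exists while the test runs)
DIALER_IP=10.244.1.102                # placeholder: IP of the test-container-pod doing the dialing
TARGET_IP=10.244.1.101                # placeholder: IP of the netserver pod being probed
kubectl -n "$NS" exec host-test-container-pod -c hostexec -- /bin/sh -c \
  "curl -g -q -s 'http://${DIALER_IP}:8080/dial?request=hostName&protocol=http&host=${TARGET_IP}&port=8080&tries=1'"
# A healthy pod network returns the target's hostname, e.g. {"responses":["netserver-0"]}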
May 24 11:57:54.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:57:54.237: INFO: namespace: e2e-tests-pod-network-test-r9s2l, resource: bindings, ignored listing per whitelist May 24 11:57:54.269: INFO: namespace e2e-tests-pod-network-test-r9s2l deletion completed in 24.10366906s • [SLOW TEST:50.690 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:57:54.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-kvv74/secret-test-cd13dfa2-9db5-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 11:57:54.372: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd162b5e-9db5-11ea-9618-0242ac110016" in namespace "e2e-tests-secrets-kvv74" to be "success or failure" May 24 11:57:54.375: INFO: Pod "pod-configmaps-cd162b5e-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.641556ms May 24 11:57:56.380: INFO: Pod "pod-configmaps-cd162b5e-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008053152s May 24 11:57:58.384: INFO: Pod "pod-configmaps-cd162b5e-9db5-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012526637s STEP: Saw pod success May 24 11:57:58.384: INFO: Pod "pod-configmaps-cd162b5e-9db5-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:57:58.387: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-cd162b5e-9db5-11ea-9618-0242ac110016 container env-test: STEP: delete the pod May 24 11:57:58.410: INFO: Waiting for pod pod-configmaps-cd162b5e-9db5-11ea-9618-0242ac110016 to disappear May 24 11:57:58.414: INFO: Pod pod-configmaps-cd162b5e-9db5-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:57:58.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-kvv74" for this suite. 
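Consuming a Secret through the environment, as the test above does, needs only a secret and an env valueFrom reference; the names, key and image below are illustrative:

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29           # illustrative image
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
EOF
kubectl logs secret-env-demo      # expect SECRET_DATA=value-1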
May 24 11:58:04.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:58:04.443: INFO: namespace: e2e-tests-secrets-kvv74, resource: bindings, ignored listing per whitelist May 24 11:58:04.509: INFO: namespace e2e-tests-secrets-kvv74 deletion completed in 6.091288495s • [SLOW TEST:10.240 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:58:04.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 24 11:58:04.626: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:58:12.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-c47xj" for this suite. 
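A hand-written pod matching the init-container scenario above; names, image and commands are illustrative. With restartPolicy Never, both init containers must run to completion, in order, before the app container starts, and the pod then finishes as Succeeded.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                 # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox:1.29           # illustrative image
    command: ["true"]
  - name: init2
    image: busybox:1.29
    command: ["true"]
  containers:
  - name: run1
    image: busybox:1.29
    command: ["true"]
EOF
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'   # -> Completed Completed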
May 24 11:58:18.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:58:18.821: INFO: namespace: e2e-tests-init-container-c47xj, resource: bindings, ignored listing per whitelist May 24 11:58:18.830: INFO: namespace e2e-tests-init-container-c47xj deletion completed in 6.112117795s • [SLOW TEST:14.320 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:58:18.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 24 11:58:18.932: INFO: namespace e2e-tests-kubectl-ncbsp May 24 11:58:18.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ncbsp' May 24 11:58:19.206: INFO: stderr: "" May 24 11:58:19.206: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 24 11:58:20.330: INFO: Selector matched 1 pods for map[app:redis] May 24 11:58:20.330: INFO: Found 0 / 1 May 24 11:58:21.214: INFO: Selector matched 1 pods for map[app:redis] May 24 11:58:21.214: INFO: Found 0 / 1 May 24 11:58:22.210: INFO: Selector matched 1 pods for map[app:redis] May 24 11:58:22.210: INFO: Found 0 / 1 May 24 11:58:23.212: INFO: Selector matched 1 pods for map[app:redis] May 24 11:58:23.212: INFO: Found 1 / 1 May 24 11:58:23.212: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 24 11:58:23.216: INFO: Selector matched 1 pods for map[app:redis] May 24 11:58:23.216: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 24 11:58:23.216: INFO: wait on redis-master startup in e2e-tests-kubectl-ncbsp May 24 11:58:23.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4p45w redis-master --namespace=e2e-tests-kubectl-ncbsp' May 24 11:58:23.332: INFO: stderr: "" May 24 11:58:23.332: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 24 May 11:58:21.967 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 May 11:58:21.967 # Server started, Redis version 3.2.12\n1:M 24 May 11:58:21.967 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 May 11:58:21.967 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 24 11:58:23.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-ncbsp' May 24 11:58:23.481: INFO: stderr: "" May 24 11:58:23.481: INFO: stdout: "service/rm2 exposed\n" May 24 11:58:23.491: INFO: Service rm2 in namespace e2e-tests-kubectl-ncbsp found. STEP: exposing service May 24 11:58:25.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-ncbsp' May 24 11:58:25.642: INFO: stderr: "" May 24 11:58:25.642: INFO: stdout: "service/rm3 exposed\n" May 24 11:58:25.750: INFO: Service rm3 in namespace e2e-tests-kubectl-ncbsp found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:58:27.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ncbsp" for this suite. 
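The expose steps above are ordinary kubectl; outside the generated test namespace the same flow, with the ports used in the log, is:

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379    # service in front of the RC's pods
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379        # second service copied from the first
kubectl get endpoints rm2 rm3    # both should list the redis-master pod's IP on port 6379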
May 24 11:58:49.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:58:49.815: INFO: namespace: e2e-tests-kubectl-ncbsp, resource: bindings, ignored listing per whitelist May 24 11:58:49.866: INFO: namespace e2e-tests-kubectl-ncbsp deletion completed in 22.103276946s • [SLOW TEST:31.036 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:58:49.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium May 24 11:58:49.994: INFO: Waiting up to 5m0s for pod "pod-ee3c5494-9db5-11ea-9618-0242ac110016" in namespace "e2e-tests-emptydir-grpnb" to be "success or failure" May 24 11:58:49.998: INFO: Pod "pod-ee3c5494-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.629051ms May 24 11:58:52.049: INFO: Pod "pod-ee3c5494-9db5-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05531979s May 24 11:58:54.122: INFO: Pod "pod-ee3c5494-9db5-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12765053s STEP: Saw pod success May 24 11:58:54.122: INFO: Pod "pod-ee3c5494-9db5-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 11:58:54.150: INFO: Trying to get logs from node hunter-worker2 pod pod-ee3c5494-9db5-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 11:58:54.184: INFO: Waiting for pod pod-ee3c5494-9db5-11ea-9618-0242ac110016 to disappear May 24 11:58:54.189: INFO: Pod pod-ee3c5494-9db5-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:58:54.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-grpnb" for this suite. 
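Same shape as the tmpfs sketch earlier, but letting emptyDir default to the node's filesystem; names and image are again illustrative, and the expectation is the 0777 directory mode the test asserts:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29           # illustrative image
    command: ["sh", "-c", "stat -c '%a' /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}                  # default medium: backed by the node's disk
EOF
kubectl logs emptydir-default-demo   # expect 777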
May 24 11:59:00.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:59:00.309: INFO: namespace: e2e-tests-emptydir-grpnb, resource: bindings, ignored listing per whitelist May 24 11:59:00.370: INFO: namespace e2e-tests-emptydir-grpnb deletion completed in 6.177521337s • [SLOW TEST:10.503 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:59:00.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 24 11:59:00.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-pz8hj' May 24 11:59:00.654: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 24 11:59:00.654: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 24 11:59:00.676: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-l8g6j] May 24 11:59:00.676: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-l8g6j" in namespace "e2e-tests-kubectl-pz8hj" to be "running and ready" May 24 11:59:00.738: INFO: Pod "e2e-test-nginx-rc-l8g6j": Phase="Pending", Reason="", readiness=false. Elapsed: 62.615991ms May 24 11:59:02.798: INFO: Pod "e2e-test-nginx-rc-l8g6j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122370758s May 24 11:59:04.802: INFO: Pod "e2e-test-nginx-rc-l8g6j": Phase="Running", Reason="", readiness=true. Elapsed: 4.126335327s May 24 11:59:04.802: INFO: Pod "e2e-test-nginx-rc-l8g6j" satisfied condition "running and ready" May 24 11:59:04.802: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-l8g6j] May 24 11:59:04.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pz8hj' May 24 11:59:04.905: INFO: stderr: "" May 24 11:59:04.905: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 24 11:59:04.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pz8hj' May 24 11:59:05.003: INFO: stderr: "" May 24 11:59:05.003: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:59:05.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pz8hj" for this suite. May 24 11:59:27.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 11:59:27.049: INFO: namespace: e2e-tests-kubectl-pz8hj, resource: bindings, ignored listing per whitelist May 24 11:59:27.107: INFO: namespace e2e-tests-kubectl-pz8hj deletion completed in 22.100559262s • [SLOW TEST:26.737 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 11:59:27.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-tp9vz STEP: creating a selector STEP: Creating the service pods in kubernetes May 24 11:59:27.261: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 24 11:59:53.374: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.104:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-tp9vz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:59:53.374: INFO: >>> kubeConfig: /root/.kube/config I0524 11:59:53.408915 6 log.go:172] (0xc001f102c0) (0xc0022565a0) Create stream I0524 11:59:53.408946 6 log.go:172] (0xc001f102c0) (0xc0022565a0) Stream added, broadcasting: 1 I0524 11:59:53.411455 6 log.go:172] (0xc001f102c0) Reply frame received for 1 I0524 11:59:53.411489 6 log.go:172] (0xc001f102c0) (0xc0017bf9a0) Create 
stream I0524 11:59:53.411502 6 log.go:172] (0xc001f102c0) (0xc0017bf9a0) Stream added, broadcasting: 3 I0524 11:59:53.412523 6 log.go:172] (0xc001f102c0) Reply frame received for 3 I0524 11:59:53.412554 6 log.go:172] (0xc001f102c0) (0xc002256640) Create stream I0524 11:59:53.412565 6 log.go:172] (0xc001f102c0) (0xc002256640) Stream added, broadcasting: 5 I0524 11:59:53.413578 6 log.go:172] (0xc001f102c0) Reply frame received for 5 I0524 11:59:53.577429 6 log.go:172] (0xc001f102c0) Data frame received for 3 I0524 11:59:53.577531 6 log.go:172] (0xc0017bf9a0) (3) Data frame handling I0524 11:59:53.577571 6 log.go:172] (0xc0017bf9a0) (3) Data frame sent I0524 11:59:53.577721 6 log.go:172] (0xc001f102c0) Data frame received for 3 I0524 11:59:53.577759 6 log.go:172] (0xc0017bf9a0) (3) Data frame handling I0524 11:59:53.578299 6 log.go:172] (0xc001f102c0) Data frame received for 5 I0524 11:59:53.578328 6 log.go:172] (0xc002256640) (5) Data frame handling I0524 11:59:53.580475 6 log.go:172] (0xc001f102c0) Data frame received for 1 I0524 11:59:53.580518 6 log.go:172] (0xc0022565a0) (1) Data frame handling I0524 11:59:53.580545 6 log.go:172] (0xc0022565a0) (1) Data frame sent I0524 11:59:53.580584 6 log.go:172] (0xc001f102c0) (0xc0022565a0) Stream removed, broadcasting: 1 I0524 11:59:53.580691 6 log.go:172] (0xc001f102c0) (0xc0022565a0) Stream removed, broadcasting: 1 I0524 11:59:53.580727 6 log.go:172] (0xc001f102c0) (0xc0017bf9a0) Stream removed, broadcasting: 3 I0524 11:59:53.580755 6 log.go:172] (0xc001f102c0) (0xc002256640) Stream removed, broadcasting: 5 May 24 11:59:53.580: INFO: Found all expected endpoints: [netserver-0] I0524 11:59:53.581364 6 log.go:172] (0xc001f102c0) Go away received May 24 11:59:53.584: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.102:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-tp9vz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 11:59:53.584: INFO: >>> kubeConfig: /root/.kube/config I0524 11:59:53.620419 6 log.go:172] (0xc001a482c0) (0xc002321720) Create stream I0524 11:59:53.620459 6 log.go:172] (0xc001a482c0) (0xc002321720) Stream added, broadcasting: 1 I0524 11:59:53.622634 6 log.go:172] (0xc001a482c0) Reply frame received for 1 I0524 11:59:53.622677 6 log.go:172] (0xc001a482c0) (0xc0022566e0) Create stream I0524 11:59:53.622693 6 log.go:172] (0xc001a482c0) (0xc0022566e0) Stream added, broadcasting: 3 I0524 11:59:53.623666 6 log.go:172] (0xc001a482c0) Reply frame received for 3 I0524 11:59:53.623854 6 log.go:172] (0xc001a482c0) (0xc002321860) Create stream I0524 11:59:53.623870 6 log.go:172] (0xc001a482c0) (0xc002321860) Stream added, broadcasting: 5 I0524 11:59:53.624763 6 log.go:172] (0xc001a482c0) Reply frame received for 5 I0524 11:59:53.699778 6 log.go:172] (0xc001a482c0) Data frame received for 3 I0524 11:59:53.699804 6 log.go:172] (0xc0022566e0) (3) Data frame handling I0524 11:59:53.699819 6 log.go:172] (0xc0022566e0) (3) Data frame sent I0524 11:59:53.699921 6 log.go:172] (0xc001a482c0) Data frame received for 5 I0524 11:59:53.699934 6 log.go:172] (0xc002321860) (5) Data frame handling I0524 11:59:53.700272 6 log.go:172] (0xc001a482c0) Data frame received for 3 I0524 11:59:53.700286 6 log.go:172] (0xc0022566e0) (3) Data frame handling I0524 11:59:53.701964 6 log.go:172] (0xc001a482c0) Data frame received for 1 I0524 11:59:53.701983 6 log.go:172] (0xc002321720) (1) Data frame handling 
I0524 11:59:53.701999 6 log.go:172] (0xc002321720) (1) Data frame sent I0524 11:59:53.702014 6 log.go:172] (0xc001a482c0) (0xc002321720) Stream removed, broadcasting: 1 I0524 11:59:53.702029 6 log.go:172] (0xc001a482c0) Go away received I0524 11:59:53.702199 6 log.go:172] (0xc001a482c0) (0xc002321720) Stream removed, broadcasting: 1 I0524 11:59:53.702233 6 log.go:172] (0xc001a482c0) (0xc0022566e0) Stream removed, broadcasting: 3 I0524 11:59:53.702246 6 log.go:172] (0xc001a482c0) (0xc002321860) Stream removed, broadcasting: 5 May 24 11:59:53.702: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 11:59:53.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-tp9vz" for this suite. May 24 12:00:17.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:00:17.767: INFO: namespace: e2e-tests-pod-network-test-tp9vz, resource: bindings, ignored listing per whitelist May 24 12:00:17.815: INFO: namespace e2e-tests-pod-network-test-tp9vz deletion completed in 24.109127565s • [SLOW TEST:50.708 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:00:17.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command May 24 12:00:17.931: INFO: Waiting up to 5m0s for pod "var-expansion-22a5e2c9-9db6-11ea-9618-0242ac110016" in namespace "e2e-tests-var-expansion-fbkhl" to be "success or failure" May 24 12:00:17.935: INFO: Pod "var-expansion-22a5e2c9-9db6-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.568946ms May 24 12:00:19.938: INFO: Pod "var-expansion-22a5e2c9-9db6-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007397923s May 24 12:00:21.943: INFO: Pod "var-expansion-22a5e2c9-9db6-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012035301s STEP: Saw pod success May 24 12:00:21.943: INFO: Pod "var-expansion-22a5e2c9-9db6-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:00:21.946: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-22a5e2c9-9db6-11ea-9618-0242ac110016 container dapi-container: STEP: delete the pod May 24 12:00:21.979: INFO: Waiting for pod var-expansion-22a5e2c9-9db6-11ea-9618-0242ac110016 to disappear May 24 12:00:22.032: INFO: Pod var-expansion-22a5e2c9-9db6-11ea-9618-0242ac110016 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:00:22.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-fbkhl" for this suite. May 24 12:00:28.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:00:28.105: INFO: namespace: e2e-tests-var-expansion-fbkhl, resource: bindings, ignored listing per whitelist May 24 12:00:28.158: INFO: namespace e2e-tests-var-expansion-fbkhl deletion completed in 6.122039942s • [SLOW TEST:10.343 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:00:28.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-28cffff1-9db6-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 12:00:28.267: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-28d09032-9db6-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-mc9hw" to be "success or failure" May 24 12:00:28.284: INFO: Pod "pod-projected-configmaps-28d09032-9db6-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 17.620577ms May 24 12:00:30.289: INFO: Pod "pod-projected-configmaps-28d09032-9db6-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021887573s May 24 12:00:32.293: INFO: Pod "pod-projected-configmaps-28d09032-9db6-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025837445s STEP: Saw pod success May 24 12:00:32.293: INFO: Pod "pod-projected-configmaps-28d09032-9db6-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:00:32.295: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-28d09032-9db6-11ea-9618-0242ac110016 container projected-configmap-volume-test: STEP: delete the pod May 24 12:00:32.328: INFO: Waiting for pod pod-projected-configmaps-28d09032-9db6-11ea-9618-0242ac110016 to disappear May 24 12:00:32.342: INFO: Pod pod-projected-configmaps-28d09032-9db6-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:00:32.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mc9hw" for this suite. May 24 12:00:38.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:00:38.427: INFO: namespace: e2e-tests-projected-mc9hw, resource: bindings, ignored listing per whitelist May 24 12:00:38.468: INFO: namespace e2e-tests-projected-mc9hw deletion completed in 6.122846511s • [SLOW TEST:10.310 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:00:38.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 24 12:00:38.698: INFO: Waiting up to 5m0s for pod "downward-api-2f08e5a1-9db6-11ea-9618-0242ac110016" in namespace "e2e-tests-downward-api-9rp9b" to be "success or failure" May 24 12:00:38.744: INFO: Pod "downward-api-2f08e5a1-9db6-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 45.627991ms May 24 12:00:40.829: INFO: Pod "downward-api-2f08e5a1-9db6-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130957845s May 24 12:00:42.834: INFO: Pod "downward-api-2f08e5a1-9db6-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.13569233s STEP: Saw pod success May 24 12:00:42.834: INFO: Pod "downward-api-2f08e5a1-9db6-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:00:42.837: INFO: Trying to get logs from node hunter-worker2 pod downward-api-2f08e5a1-9db6-11ea-9618-0242ac110016 container dapi-container: STEP: delete the pod May 24 12:00:42.877: INFO: Waiting for pod downward-api-2f08e5a1-9db6-11ea-9618-0242ac110016 to disappear May 24 12:00:42.899: INFO: Pod downward-api-2f08e5a1-9db6-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:00:42.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9rp9b" for this suite. May 24 12:00:48.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:00:48.985: INFO: namespace: e2e-tests-downward-api-9rp9b, resource: bindings, ignored listing per whitelist May 24 12:00:48.999: INFO: namespace e2e-tests-downward-api-9rp9b deletion completed in 6.095850777s • [SLOW TEST:10.530 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:00:48.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 12:00:49.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' May 24 12:00:49.192: INFO: stderr: "" May 24 12:00:49.192: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" May 24 12:00:49.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tsgrn' May 24 12:00:49.469: INFO: stderr: "" May 24 12:00:49.469: INFO: stdout: "replicationcontroller/redis-master created\n" May 24 12:00:49.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tsgrn' May 24 12:00:49.784: INFO: stderr: "" May 24 12:00:49.784: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
May 24 12:00:50.833: INFO: Selector matched 1 pods for map[app:redis] May 24 12:00:50.833: INFO: Found 0 / 1 May 24 12:00:51.788: INFO: Selector matched 1 pods for map[app:redis] May 24 12:00:51.788: INFO: Found 0 / 1 May 24 12:00:52.788: INFO: Selector matched 1 pods for map[app:redis] May 24 12:00:52.789: INFO: Found 0 / 1 May 24 12:00:53.788: INFO: Selector matched 1 pods for map[app:redis] May 24 12:00:53.788: INFO: Found 1 / 1 May 24 12:00:53.788: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 24 12:00:53.791: INFO: Selector matched 1 pods for map[app:redis] May 24 12:00:53.791: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 24 12:00:53.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-cn2zh --namespace=e2e-tests-kubectl-tsgrn' May 24 12:00:53.921: INFO: stderr: "" May 24 12:00:53.921: INFO: stdout: "Name: redis-master-cn2zh\nNamespace: e2e-tests-kubectl-tsgrn\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.17.0.3\nStart Time: Sun, 24 May 2020 12:00:49 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.106\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://836ae9c24bd6c2909ebd9cd445939045b82a32a530de43d26bde086938c1e14c\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 24 May 2020 12:00:52 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-z28qt (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-z28qt:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-z28qt\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-tests-kubectl-tsgrn/redis-master-cn2zh to hunter-worker\n Normal Pulled 3s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n" May 24 12:00:53.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-tsgrn' May 24 12:00:54.046: INFO: stderr: "" May 24 12:00:54.046: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-tsgrn\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-cn2zh\n" May 24 12:00:54.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master 
--namespace=e2e-tests-kubectl-tsgrn' May 24 12:00:54.154: INFO: stderr: "" May 24 12:00:54.154: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-tsgrn\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.105.231.191\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.106:6379\nSession Affinity: None\nEvents: \n" May 24 12:00:54.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' May 24 12:00:54.301: INFO: stderr: "" May 24 12:00:54.301: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 24 May 2020 12:00:45 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 24 May 2020 12:00:45 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 24 May 2020 12:00:45 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 24 May 2020 12:00:45 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 69d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 69d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 69d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 69d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 69d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 69d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 69d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 24 12:00:54.301: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-tsgrn' May 24 12:00:54.408: INFO: stderr: "" May 24 12:00:54.408: INFO: stdout: "Name: e2e-tests-kubectl-tsgrn\nLabels: e2e-framework=kubectl\n e2e-run=5142e99c-9dac-11ea-9618-0242ac110016\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:00:54.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tsgrn" for this suite. May 24 12:01:16.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:01:16.455: INFO: namespace: e2e-tests-kubectl-tsgrn, resource: bindings, ignored listing per whitelist May 24 12:01:16.510: INFO: namespace e2e-tests-kubectl-tsgrn deletion completed in 22.097157556s • [SLOW TEST:27.511 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:01:16.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0524 12:01:17.731326 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 24 12:01:17.731: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:01:17.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-6jlsm" for this suite. May 24 12:01:23.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:01:23.768: INFO: namespace: e2e-tests-gc-6jlsm, resource: bindings, ignored listing per whitelist May 24 12:01:23.836: INFO: namespace e2e-tests-gc-6jlsm deletion completed in 6.101599457s • [SLOW TEST:7.325 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:01:23.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 12:01:23.984: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.706278ms)
May 24 12:01:23.986: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.712934ms)
May 24 12:01:23.989: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.705286ms)
May 24 12:01:23.992: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.102069ms)
May 24 12:01:23.995: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.991797ms)
May 24 12:01:23.998: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.998002ms)
May 24 12:01:24.001: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.174871ms)
May 24 12:01:24.005: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.547992ms)
May 24 12:01:24.008: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.189346ms)
May 24 12:01:24.012: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.647439ms)
May 24 12:01:24.016: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.573874ms)
May 24 12:01:24.019: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.361638ms)
May 24 12:01:24.039: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 20.480934ms)
May 24 12:01:24.044: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.007076ms)
May 24 12:01:24.047: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.521236ms)
May 24 12:01:24.051: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.420183ms)
May 24 12:01:24.055: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.042664ms)
May 24 12:01:24.059: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.070676ms)
May 24 12:01:24.063: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.123245ms)
May 24 12:01:24.066: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/
(200; 3.416933ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:01:24.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-nhr84" for this suite. May 24 12:01:30.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:01:30.150: INFO: namespace: e2e-tests-proxy-nhr84, resource: bindings, ignored listing per whitelist May 24 12:01:30.158: INFO: namespace e2e-tests-proxy-nhr84 deletion completed in 6.088466659s • [SLOW TEST:6.323 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:01:30.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 24 12:01:30.287: INFO: Waiting up to 5m0s for pod "pod-4dc4021c-9db6-11ea-9618-0242ac110016" in namespace "e2e-tests-emptydir-44bgm" to be "success or failure" May 24 12:01:30.296: INFO: Pod "pod-4dc4021c-9db6-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 8.630341ms May 24 12:01:32.300: INFO: Pod "pod-4dc4021c-9db6-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012967863s May 24 12:01:34.305: INFO: Pod "pod-4dc4021c-9db6-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017703057s STEP: Saw pod success May 24 12:01:34.305: INFO: Pod "pod-4dc4021c-9db6-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:01:34.308: INFO: Trying to get logs from node hunter-worker pod pod-4dc4021c-9db6-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 12:01:34.347: INFO: Waiting for pod pod-4dc4021c-9db6-11ea-9618-0242ac110016 to disappear May 24 12:01:34.357: INFO: Pod pod-4dc4021c-9db6-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:01:34.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-44bgm" for this suite. 
May 24 12:01:40.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:01:40.384: INFO: namespace: e2e-tests-emptydir-44bgm, resource: bindings, ignored listing per whitelist May 24 12:01:40.453: INFO: namespace e2e-tests-emptydir-44bgm deletion completed in 6.091835629s • [SLOW TEST:10.293 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:01:40.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-53e91f82-9db6-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 12:01:40.574: INFO: Waiting up to 5m0s for pod "pod-configmaps-53e9cbf3-9db6-11ea-9618-0242ac110016" in namespace "e2e-tests-configmap-5mvjr" to be "success or failure" May 24 12:01:40.594: INFO: Pod "pod-configmaps-53e9cbf3-9db6-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 20.21675ms May 24 12:01:42.599: INFO: Pod "pod-configmaps-53e9cbf3-9db6-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024592182s May 24 12:01:44.603: INFO: Pod "pod-configmaps-53e9cbf3-9db6-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0292151s STEP: Saw pod success May 24 12:01:44.603: INFO: Pod "pod-configmaps-53e9cbf3-9db6-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:01:44.606: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-53e9cbf3-9db6-11ea-9618-0242ac110016 container configmap-volume-test: STEP: delete the pod May 24 12:01:44.627: INFO: Waiting for pod pod-configmaps-53e9cbf3-9db6-11ea-9618-0242ac110016 to disappear May 24 12:01:44.631: INFO: Pod pod-configmaps-53e9cbf3-9db6-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:01:44.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-5mvjr" for this suite. 
May 24 12:01:50.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:01:50.728: INFO: namespace: e2e-tests-configmap-5mvjr, resource: bindings, ignored listing per whitelist May 24 12:01:50.774: INFO: namespace e2e-tests-configmap-5mvjr deletion completed in 6.134743961s • [SLOW TEST:10.321 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:01:50.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-5mtrj May 24 12:01:54.990: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-5mtrj STEP: checking the pod's current state and verifying that restartCount is present May 24 12:01:54.994: INFO: Initial restart count of pod liveness-exec is 0 May 24 12:02:43.097: INFO: Restart count of pod e2e-tests-container-probe-5mtrj/liveness-exec is now 1 (48.102851598s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:02:43.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-5mtrj" for this suite. 
May 24 12:02:49.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:02:49.207: INFO: namespace: e2e-tests-container-probe-5mtrj, resource: bindings, ignored listing per whitelist May 24 12:02:49.246: INFO: namespace e2e-tests-container-probe-5mtrj deletion completed in 6.133969186s • [SLOW TEST:58.472 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:02:49.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 12:02:49.359: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:02:55.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-trjpm" for this suite. 
May 24 12:03:35.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:03:35.644: INFO: namespace: e2e-tests-pods-trjpm, resource: bindings, ignored listing per whitelist May 24 12:03:35.736: INFO: namespace e2e-tests-pods-trjpm deletion completed in 40.245480446s • [SLOW TEST:46.489 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:03:35.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 24 12:03:40.401: INFO: Successfully updated pod "annotationupdate98a25b29-9db6-11ea-9618-0242ac110016" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:03:44.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jwdv9" for this suite. 
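The "update annotations on modification" case above relies on a downwardAPI volume, whose projected files are refreshed by the kubelet when pod metadata changes. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo       # hypothetical name
  annotations:
    build: one
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# change an annotation; the projected file is refreshed by the kubelet shortly afterwards
kubectl annotate pod annotationupdate-demo build=two --overwrite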
May 24 12:04:06.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:04:06.552: INFO: namespace: e2e-tests-downward-api-jwdv9, resource: bindings, ignored listing per whitelist May 24 12:04:06.569: INFO: namespace e2e-tests-downward-api-jwdv9 deletion completed in 22.123421434s • [SLOW TEST:30.832 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:04:06.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-aafed741-9db6-11ea-9618-0242ac110016 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:04:12.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-dn6tj" for this suite. 
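The "binary data should be reflected in volume" case above covers the binaryData field of a ConfigMap, which holds base64-encoded bytes and is projected into volumes alongside data keys. A minimal sketch with hypothetical names and illustrative content:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo       # hypothetical name
data:
  text-data: "hello text"
binaryData:
  binary-file: aGVsbG8gYmluYXJ5     # base64 of "hello binary"
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-binary-pod        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.29
    # both the text key and the binary key appear as files in the mounted volume
    command: ["sh", "-c", "cat /etc/cm/text-data && od -c /etc/cm/binary-file"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: configmap-binary-demo
EOF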
May 24 12:04:34.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:04:34.794: INFO: namespace: e2e-tests-configmap-dn6tj, resource: bindings, ignored listing per whitelist May 24 12:04:34.840: INFO: namespace e2e-tests-configmap-dn6tj deletion completed in 22.097608656s • [SLOW TEST:28.271 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:04:34.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 24 12:04:35.004: INFO: Waiting up to 5m0s for pod "pod-bbdffcb6-9db6-11ea-9618-0242ac110016" in namespace "e2e-tests-emptydir-sjkcc" to be "success or failure" May 24 12:04:35.008: INFO: Pod "pod-bbdffcb6-9db6-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.852332ms May 24 12:04:37.012: INFO: Pod "pod-bbdffcb6-9db6-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008003113s May 24 12:04:39.016: INFO: Pod "pod-bbdffcb6-9db6-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012164122s STEP: Saw pod success May 24 12:04:39.016: INFO: Pod "pod-bbdffcb6-9db6-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:04:39.019: INFO: Trying to get logs from node hunter-worker pod pod-bbdffcb6-9db6-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 12:04:39.062: INFO: Waiting for pod pod-bbdffcb6-9db6-11ea-9618-0242ac110016 to disappear May 24 12:04:39.072: INFO: Pod pod-bbdffcb6-9db6-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:04:39.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sjkcc" for this suite. 
May 24 12:04:45.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:04:45.114: INFO: namespace: e2e-tests-emptydir-sjkcc, resource: bindings, ignored listing per whitelist May 24 12:04:45.181: INFO: namespace e2e-tests-emptydir-sjkcc deletion completed in 6.104613365s • [SLOW TEST:10.341 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:04:45.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 12:04:45.283: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:04:49.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-zh5xk" for this suite. 
May 24 12:05:35.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:05:35.478: INFO: namespace: e2e-tests-pods-zh5xk, resource: bindings, ignored listing per whitelist May 24 12:05:35.504: INFO: namespace e2e-tests-pods-zh5xk deletion completed in 46.108980984s • [SLOW TEST:50.322 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:05:35.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-e00106cb-9db6-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 12:05:35.621: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e0033903-9db6-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-g6v5r" to be "success or failure" May 24 12:05:35.625: INFO: Pod "pod-projected-secrets-e0033903-9db6-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.636223ms May 24 12:05:37.928: INFO: Pod "pod-projected-secrets-e0033903-9db6-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.306668135s May 24 12:05:39.931: INFO: Pod "pod-projected-secrets-e0033903-9db6-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.309460595s STEP: Saw pod success May 24 12:05:39.931: INFO: Pod "pod-projected-secrets-e0033903-9db6-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:05:39.932: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-e0033903-9db6-11ea-9618-0242ac110016 container secret-volume-test: STEP: delete the pod May 24 12:05:39.945: INFO: Waiting for pod pod-projected-secrets-e0033903-9db6-11ea-9618-0242ac110016 to disappear May 24 12:05:39.949: INFO: Pod pod-projected-secrets-e0033903-9db6-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:05:39.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-g6v5r" for this suite. 
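The projected-secret case above mounts the same Secret through two separate projected volumes in one pod. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo       # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-multi-volume-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret-1/data-1 /etc/projected-secret-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-1
    - name: secret-volume-2
      mountPath: /etc/projected-secret-2
  volumes:                          # the same secret projected into two volumes
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-demo
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-demo
EOF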
May 24 12:05:45.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:05:46.026: INFO: namespace: e2e-tests-projected-g6v5r, resource: bindings, ignored listing per whitelist May 24 12:05:46.056: INFO: namespace e2e-tests-projected-g6v5r deletion completed in 6.103714727s • [SLOW TEST:10.552 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:05:46.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 24 12:05:54.268: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 12:05:54.278: INFO: Pod pod-with-prestop-http-hook still exists May 24 12:05:56.278: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 12:05:56.283: INFO: Pod pod-with-prestop-http-hook still exists May 24 12:05:58.278: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 12:05:58.282: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:05:58.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-hbvcr" for this suite. 
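The prestop-http-hook case above places an httpGet handler under lifecycle.preStop, which the kubelet calls before stopping the container. The conformance test points the hook at a separate handler pod and verifies the request arrived; the sketch below only shows where the hook sits in the pod spec, using a hypothetical name and a generic image that serves HTTP on the hook port.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-http-hook-demo      # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.17               # any image serving HTTP on the hook port works
    ports:
    - containerPort: 80
    lifecycle:
      preStop:
        httpGet:                    # issued by the kubelet before the container is stopped
          path: /
          port: 80
EOF
# deleting the pod triggers the preStop HTTP GET before termination
kubectl delete pod prestop-http-hook-demo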
May 24 12:06:20.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:06:20.382: INFO: namespace: e2e-tests-container-lifecycle-hook-hbvcr, resource: bindings, ignored listing per whitelist May 24 12:06:20.432: INFO: namespace e2e-tests-container-lifecycle-hook-hbvcr deletion completed in 22.140068334s • [SLOW TEST:34.376 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:06:20.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin May 24 12:06:20.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-4s8st run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 24 12:06:26.770: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0524 12:06:26.709467 4455 log.go:172] (0xc0006c6370) (0xc0007fe140) Create stream\nI0524 12:06:26.709500 4455 log.go:172] (0xc0006c6370) (0xc0007fe140) Stream added, broadcasting: 1\nI0524 12:06:26.711834 4455 log.go:172] (0xc0006c6370) Reply frame received for 1\nI0524 12:06:26.711875 4455 log.go:172] (0xc0006c6370) (0xc0007fe1e0) Create stream\nI0524 12:06:26.711884 4455 log.go:172] (0xc0006c6370) (0xc0007fe1e0) Stream added, broadcasting: 3\nI0524 12:06:26.712758 4455 log.go:172] (0xc0006c6370) Reply frame received for 3\nI0524 12:06:26.712797 4455 log.go:172] (0xc0006c6370) (0xc000a76000) Create stream\nI0524 12:06:26.712810 4455 log.go:172] (0xc0006c6370) (0xc000a76000) Stream added, broadcasting: 5\nI0524 12:06:26.714093 4455 log.go:172] (0xc0006c6370) Reply frame received for 5\nI0524 12:06:26.714128 4455 log.go:172] (0xc0006c6370) (0xc0007fe280) Create stream\nI0524 12:06:26.714139 4455 log.go:172] (0xc0006c6370) (0xc0007fe280) Stream added, broadcasting: 7\nI0524 12:06:26.715140 4455 log.go:172] (0xc0006c6370) Reply frame received for 7\nI0524 12:06:26.715307 4455 log.go:172] (0xc0007fe1e0) (3) Writing data frame\nI0524 12:06:26.715451 4455 log.go:172] (0xc0007fe1e0) (3) Writing data frame\nI0524 12:06:26.716549 4455 log.go:172] (0xc0006c6370) Data frame received for 5\nI0524 12:06:26.716584 4455 log.go:172] (0xc000a76000) (5) Data frame handling\nI0524 12:06:26.716609 4455 log.go:172] (0xc000a76000) (5) Data frame sent\nI0524 12:06:26.717394 4455 log.go:172] (0xc0006c6370) Data frame received for 5\nI0524 12:06:26.717420 4455 log.go:172] (0xc000a76000) (5) Data frame handling\nI0524 12:06:26.717434 4455 log.go:172] (0xc000a76000) (5) Data frame sent\nI0524 12:06:26.746822 4455 log.go:172] (0xc0006c6370) Data frame received for 7\nI0524 12:06:26.746942 4455 log.go:172] (0xc0007fe280) (7) Data frame handling\nI0524 12:06:26.747024 4455 log.go:172] (0xc0006c6370) Data frame received for 5\nI0524 12:06:26.747127 4455 log.go:172] (0xc000a76000) (5) Data frame handling\nI0524 12:06:26.747206 4455 log.go:172] (0xc0006c6370) Data frame received for 1\nI0524 12:06:26.747242 4455 log.go:172] (0xc0006c6370) (0xc0007fe1e0) Stream removed, broadcasting: 3\nI0524 12:06:26.747285 4455 log.go:172] (0xc0007fe140) (1) Data frame handling\nI0524 12:06:26.747328 4455 log.go:172] (0xc0007fe140) (1) Data frame sent\nI0524 12:06:26.747365 4455 log.go:172] (0xc0006c6370) (0xc0007fe140) Stream removed, broadcasting: 1\nI0524 12:06:26.747399 4455 log.go:172] (0xc0006c6370) Go away received\nI0524 12:06:26.747474 4455 log.go:172] (0xc0006c6370) (0xc0007fe140) Stream removed, broadcasting: 1\nI0524 12:06:26.747491 4455 log.go:172] (0xc0006c6370) (0xc0007fe1e0) Stream removed, broadcasting: 3\nI0524 12:06:26.747501 4455 log.go:172] (0xc0006c6370) (0xc000a76000) Stream removed, broadcasting: 5\nI0524 12:06:26.747513 4455 log.go:172] (0xc0006c6370) (0xc0007fe280) Stream removed, broadcasting: 7\n" May 24 12:06:26.771: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:06:28.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4s8st" for this suite. 
May 24 12:06:34.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:06:34.881: INFO: namespace: e2e-tests-kubectl-4s8st, resource: bindings, ignored listing per whitelist May 24 12:06:34.897: INFO: namespace e2e-tests-kubectl-4s8st deletion completed in 6.116237737s • [SLOW TEST:14.465 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:06:34.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 12:06:34.992: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0364fc37-9db7-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-xfx2g" to be "success or failure" May 24 12:06:35.004: INFO: Pod "downwardapi-volume-0364fc37-9db7-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 11.609202ms May 24 12:06:37.008: INFO: Pod "downwardapi-volume-0364fc37-9db7-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015564969s May 24 12:06:39.012: INFO: Pod "downwardapi-volume-0364fc37-9db7-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020150643s STEP: Saw pod success May 24 12:06:39.012: INFO: Pod "downwardapi-volume-0364fc37-9db7-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:06:39.016: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-0364fc37-9db7-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 12:06:39.043: INFO: Waiting for pod downwardapi-volume-0364fc37-9db7-11ea-9618-0242ac110016 to disappear May 24 12:06:39.182: INFO: Pod downwardapi-volume-0364fc37-9db7-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:06:39.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xfx2g" for this suite. 
May 24 12:06:45.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:06:45.452: INFO: namespace: e2e-tests-projected-xfx2g, resource: bindings, ignored listing per whitelist May 24 12:06:45.454: INFO: namespace e2e-tests-projected-xfx2g deletion completed in 6.267438154s • [SLOW TEST:10.556 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:06:45.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 12:06:45.614: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09b71007-9db7-11ea-9618-0242ac110016" in namespace "e2e-tests-downward-api-nkgdn" to be "success or failure" May 24 12:06:45.655: INFO: Pod "downwardapi-volume-09b71007-9db7-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 41.210925ms May 24 12:06:47.746: INFO: Pod "downwardapi-volume-09b71007-9db7-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132255646s May 24 12:06:49.750: INFO: Pod "downwardapi-volume-09b71007-9db7-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.136643452s STEP: Saw pod success May 24 12:06:49.750: INFO: Pod "downwardapi-volume-09b71007-9db7-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:06:49.754: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-09b71007-9db7-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 12:06:49.775: INFO: Waiting for pod downwardapi-volume-09b71007-9db7-11ea-9618-0242ac110016 to disappear May 24 12:06:49.779: INFO: Pod downwardapi-volume-09b71007-9db7-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:06:49.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-nkgdn" for this suite. 
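The "container's cpu request" case above uses a downwardAPI volume item with resourceFieldRef, which exposes a container's resource request as a file. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m               # report the request in millicores (file contains "250")
EOF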
May 24 12:06:55.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:06:55.919: INFO: namespace: e2e-tests-downward-api-nkgdn, resource: bindings, ignored listing per whitelist May 24 12:06:55.919: INFO: namespace e2e-tests-downward-api-nkgdn deletion completed in 6.137026207s • [SLOW TEST:10.465 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:06:55.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-0fedf0d9-9db7-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 12:06:56.063: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0ff5d31b-9db7-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-vgtxs" to be "success or failure" May 24 12:06:56.090: INFO: Pod "pod-projected-configmaps-0ff5d31b-9db7-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 27.327361ms May 24 12:06:58.094: INFO: Pod "pod-projected-configmaps-0ff5d31b-9db7-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0310423s May 24 12:07:00.099: INFO: Pod "pod-projected-configmaps-0ff5d31b-9db7-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035610567s STEP: Saw pod success May 24 12:07:00.099: INFO: Pod "pod-projected-configmaps-0ff5d31b-9db7-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:07:00.102: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-0ff5d31b-9db7-11ea-9618-0242ac110016 container projected-configmap-volume-test: STEP: delete the pod May 24 12:07:00.144: INFO: Waiting for pod pod-projected-configmaps-0ff5d31b-9db7-11ea-9618-0242ac110016 to disappear May 24 12:07:00.172: INFO: Pod pod-projected-configmaps-0ff5d31b-9db7-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:07:00.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vgtxs" for this suite. 
May 24 12:07:06.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:07:06.263: INFO: namespace: e2e-tests-projected-vgtxs, resource: bindings, ignored listing per whitelist May 24 12:07:06.295: INFO: namespace e2e-tests-projected-vgtxs deletion completed in 6.119914696s • [SLOW TEST:10.375 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:07:06.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod May 24 12:07:10.451: INFO: Pod pod-hostip-161db3a1-9db7-11ea-9618-0242ac110016 has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:07:10.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-jsk7t" for this suite. 
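The host-IP case above asserts that status.hostIP is populated once the pod is running (172.17.0.4 for hunter-worker in the log). A sketch of reading that field with client-go, assuming a client-go vintage contemporary with this v1.13 suite, where typed Get calls take no context; kubeconfig path, namespace and pod name are illustrative.

// Sketch: read a running pod's hostIP, the field this test checks.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("default").Get("pod-hostip-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Once the pod is scheduled and running, status.hostIP carries the
	// address of the node it landed on.
	fmt.Printf("pod %s has hostIP: %s\n", pod.Name, pod.Status.HostIP)
}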
May 24 12:07:32.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:07:32.527: INFO: namespace: e2e-tests-pods-jsk7t, resource: bindings, ignored listing per whitelist May 24 12:07:32.546: INFO: namespace e2e-tests-pods-jsk7t deletion completed in 22.090415239s • [SLOW TEST:26.251 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:07:32.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:07:32.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-bvw9x" for this suite. 
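The QOS-class case above verifies that a class is recorded on the submitted pod. The rule being exercised: a pod whose every container has requests equal to limits is classified Guaranteed, and the class shows up read-only in status.qosClass. A minimal sketch with illustrative values:

// Sketch: a pod spec that Kubernetes classifies as Guaranteed QoS.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-qos-class-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: res,
					Limits:   res, // requests == limits for every container => Guaranteed
				},
			}},
		},
	}
	// After submission the control plane sets pod.Status.QOSClass; for this
	// spec the expected value is corev1.PodQOSGuaranteed.
	fmt.Println("expected QoS class:", corev1.PodQOSGuaranteed, "for pod", pod.Name)
}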
May 24 12:07:54.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:07:54.784: INFO: namespace: e2e-tests-pods-bvw9x, resource: bindings, ignored listing per whitelist May 24 12:07:54.817: INFO: namespace e2e-tests-pods-bvw9x deletion completed in 22.129497889s • [SLOW TEST:22.271 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:07:54.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-s77d2 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet May 24 12:07:54.971: INFO: Found 0 stateful pods, waiting for 3 May 24 12:08:04.986: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 12:08:04.986: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 12:08:04.986: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 24 12:08:14.976: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 12:08:14.976: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 12:08:14.976: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 24 12:08:15.001: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 24 12:08:25.094: INFO: Updating stateful set ss2 May 24 12:08:25.108: INFO: Waiting for Pod e2e-tests-statefulset-s77d2/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 24 12:08:35.223: INFO: Found 2 stateful pods, waiting for 3 May 24 12:08:45.227: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running 
- Ready=true May 24 12:08:45.227: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 12:08:45.227: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 24 12:08:45.251: INFO: Updating stateful set ss2 May 24 12:08:45.282: INFO: Waiting for Pod e2e-tests-statefulset-s77d2/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 24 12:08:55.309: INFO: Updating stateful set ss2 May 24 12:08:55.318: INFO: Waiting for StatefulSet e2e-tests-statefulset-s77d2/ss2 to complete update May 24 12:08:55.318: INFO: Waiting for Pod e2e-tests-statefulset-s77d2/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 24 12:09:05.326: INFO: Deleting all statefulset in ns e2e-tests-statefulset-s77d2 May 24 12:09:05.330: INFO: Scaling statefulset ss2 to 0 May 24 12:09:25.373: INFO: Waiting for statefulset status.replicas updated to 0 May 24 12:09:25.376: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:09:25.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-s77d2" for this suite. May 24 12:09:31.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:09:31.517: INFO: namespace: e2e-tests-statefulset-s77d2, resource: bindings, ignored listing per whitelist May 24 12:09:31.568: INFO: namespace e2e-tests-statefulset-s77d2 deletion completed in 6.175347412s • [SLOW TEST:96.750 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:09:31.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 24 12:09:31.709: INFO: Pod name pod-release: Found 0 pods out of 1 May 24 12:09:36.713: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 
12:09:37.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-zrk49" for this suite. May 24 12:09:43.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:09:43.798: INFO: namespace: e2e-tests-replication-controller-zrk49, resource: bindings, ignored listing per whitelist May 24 12:09:43.820: INFO: namespace e2e-tests-replication-controller-zrk49 deletion completed in 6.089816116s • [SLOW TEST:12.252 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:09:43.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-7413b085-9db7-11ea-9618-0242ac110016 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-7413b085-9db7-11ea-9618-0242ac110016 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:09:50.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ng4s2" for this suite. 
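The projected-ConfigMap update case above hinges on mutating the ConfigMap that a projected volume consumes and waiting for the kubelet's sync loop to rewrite the mounted file. A sketch of the update half, assuming a client-go vintage matching this v1.13 suite (no context arguments); namespace, names and keys are illustrative.

// Sketch: update a ConfigMap backing a projected volume; the mounted file
// converges after the kubelet resyncs the volume.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cms := client.CoreV1().ConfigMaps("default")

	cm, err := cms.Get("projected-configmap-test-upd", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // the change a pod watching the mounted file should observe
	if _, err := cms.Update(cm); err != nil {
		panic(err)
	}
	fmt.Println("updated", cm.Name)
}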
May 24 12:10:12.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:10:12.238: INFO: namespace: e2e-tests-projected-ng4s2, resource: bindings, ignored listing per whitelist May 24 12:10:12.277: INFO: namespace e2e-tests-projected-ng4s2 deletion completed in 22.10940195s • [SLOW TEST:28.457 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:10:12.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-lf9jp May 24 12:10:16.414: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-lf9jp STEP: checking the pod's current state and verifying that restartCount is present May 24 12:10:16.417: INFO: Initial restart count of pod liveness-http is 0 May 24 12:10:36.464: INFO: Restart count of pod e2e-tests-container-probe-lf9jp/liveness-http is now 1 (20.046476688s elapsed) May 24 12:10:54.501: INFO: Restart count of pod e2e-tests-container-probe-lf9jp/liveness-http is now 2 (38.083702082s elapsed) May 24 12:11:14.546: INFO: Restart count of pod e2e-tests-container-probe-lf9jp/liveness-http is now 3 (58.128880641s elapsed) May 24 12:11:34.636: INFO: Restart count of pod e2e-tests-container-probe-lf9jp/liveness-http is now 4 (1m18.218604753s elapsed) May 24 12:11:56.681: INFO: Restart count of pod e2e-tests-container-probe-lf9jp/liveness-http is now 5 (1m40.263790905s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:11:56.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-lf9jp" for this suite. 
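The liveness-http case above depends on a pod whose HTTP handler eventually starts failing, so the kubelet restarts the container and status.containerStatuses[0].restartCount keeps climbing. A sketch of that fixture shape; the image, path and port are illustrative, and the probe handler is assigned through the promoted field so the snippet does not depend on the Handler/ProbeHandler rename across k8s.io/api versions.

// Sketch: a pod with an HTTP liveness probe whose restart count grows as
// the probed handler starts failing.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	}
	probe.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "liveness",
				Image:         "k8s.gcr.io/liveness",
				Command:       []string{"/server"},
				LivenessProbe: probe,
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}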
May 24 12:12:02.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:12:02.753: INFO: namespace: e2e-tests-container-probe-lf9jp, resource: bindings, ignored listing per whitelist May 24 12:12:02.804: INFO: namespace e2e-tests-container-probe-lf9jp deletion completed in 6.098666613s • [SLOW TEST:110.526 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:12:02.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 24 12:12:02.905: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 12:12:02.963: INFO: Waiting for terminating namespaces to be deleted... May 24 12:12:02.966: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 24 12:12:02.973: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 24 12:12:02.973: INFO: Container kube-proxy ready: true, restart count 0 May 24 12:12:02.973: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 24 12:12:02.973: INFO: Container kindnet-cni ready: true, restart count 0 May 24 12:12:02.973: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 24 12:12:02.974: INFO: Container coredns ready: true, restart count 0 May 24 12:12:02.974: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 24 12:12:02.981: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 24 12:12:02.981: INFO: Container kindnet-cni ready: true, restart count 0 May 24 12:12:02.981: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 24 12:12:02.981: INFO: Container coredns ready: true, restart count 0 May 24 12:12:02.981: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 24 12:12:02.981: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 May 24 12:12:03.105: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker May 24 
12:12:03.105: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 May 24 12:12:03.105: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker May 24 12:12:03.105: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 May 24 12:12:03.105: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 May 24 12:12:03.105: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-c6f9d343-9db7-11ea-9618-0242ac110016.1611f5ba1bed0bfb], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-kspbl/filler-pod-c6f9d343-9db7-11ea-9618-0242ac110016 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-c6f9d343-9db7-11ea-9618-0242ac110016.1611f5ba6dbb2159], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c6f9d343-9db7-11ea-9618-0242ac110016.1611f5bac80f574f], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-c6f9d343-9db7-11ea-9618-0242ac110016.1611f5bad8e8675e], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-c6faaccb-9db7-11ea-9618-0242ac110016.1611f5ba1d080be7], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-kspbl/filler-pod-c6faaccb-9db7-11ea-9618-0242ac110016 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-c6faaccb-9db7-11ea-9618-0242ac110016.1611f5ba9f7ba2c1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c6faaccb-9db7-11ea-9618-0242ac110016.1611f5badc8d4e09], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-c6faaccb-9db7-11ea-9618-0242ac110016.1611f5baeb86dd46], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.1611f5bb852e92e6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:12:10.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-kspbl" for this suite. 
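The scheduling case above works by summing existing CPU requests per node, creating "filler" pods that soak up most of the remaining allocatable CPU, and then confirming that one more request cannot be placed (the FailedScheduling / "Insufficient cpu" event in the log). A minimal sketch of the pod shapes involved; the request values are illustrative, not the test's computed ones.

// Sketch: pods that fit or fail to fit based on CPU requests vs. allocatable.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func pauseWithCPURequest(name, cpu string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
				},
			}},
		},
	}
}

func main() {
	// Filler pods consume most of each node's allocatable CPU...
	filler := pauseWithCPURequest("filler-pod", "1500m")
	// ...so this additional request can no longer be satisfied on any node.
	additional := pauseWithCPURequest("additional-pod", "1000m")

	for _, p := range []*corev1.Pod{filler, additional} {
		b, _ := json.MarshalIndent(p, "", "  ")
		fmt.Println(string(b))
	}
}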
May 24 12:12:16.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:12:16.366: INFO: namespace: e2e-tests-sched-pred-kspbl, resource: bindings, ignored listing per whitelist May 24 12:12:16.413: INFO: namespace e2e-tests-sched-pred-kspbl deletion completed in 6.094875654s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:13.609 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:12:16.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-kz4rx STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-kz4rx STEP: Deleting pre-stop pod May 24 12:12:29.780: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:12:29.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-kz4rx" for this suite. 
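The preStop case above deletes a tester pod and checks that its preStop hook reported in to the server pod before the container was killed (the "prestop": 1 counter in the log). A sketch of a preStop hook wired to notify a peer over HTTP; the suite's actual fixture differs in detail, and this is written against the v1.13-era core/v1 API, where the hook type is corev1.Handler (later releases renamed it LifecycleHandler). The server address, port and path are illustrative.

// Sketch: a container with a preStop HTTP hook that fires on deletion.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	tester := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/write",      // the server records the hit
							Host: "10.244.1.10", // server pod IP (illustrative)
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(tester, "", "  ")
	fmt.Println(string(b))
}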
May 24 12:13:07.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:13:07.851: INFO: namespace: e2e-tests-prestop-kz4rx, resource: bindings, ignored listing per whitelist May 24 12:13:07.903: INFO: namespace e2e-tests-prestop-kz4rx deletion completed in 38.090810392s • [SLOW TEST:51.489 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:13:07.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 24 12:13:08.064: INFO: Waiting up to 5m0s for pod "downward-api-edac7421-9db7-11ea-9618-0242ac110016" in namespace "e2e-tests-downward-api-jr22s" to be "success or failure" May 24 12:13:08.067: INFO: Pod "downward-api-edac7421-9db7-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.356972ms May 24 12:13:10.071: INFO: Pod "downward-api-edac7421-9db7-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007869698s May 24 12:13:12.076: INFO: Pod "downward-api-edac7421-9db7-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012419987s STEP: Saw pod success May 24 12:13:12.076: INFO: Pod "downward-api-edac7421-9db7-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:13:12.080: INFO: Trying to get logs from node hunter-worker2 pod downward-api-edac7421-9db7-11ea-9618-0242ac110016 container dapi-container: STEP: delete the pod May 24 12:13:12.136: INFO: Waiting for pod downward-api-edac7421-9db7-11ea-9618-0242ac110016 to disappear May 24 12:13:12.151: INFO: Pod downward-api-edac7421-9db7-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:13:12.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jr22s" for this suite. 
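The downward API env-var case above injects pod name, namespace and pod IP into the container environment via fieldRef. A minimal sketch; variable names, image and command are illustrative, not the suite's fixture.

// Sketch: downward API environment variables sourced from pod fields.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func fieldRefEnv(name, path string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
		},
	}
}

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep ^POD_"},
				Env: []corev1.EnvVar{
					fieldRefEnv("POD_NAME", "metadata.name"),
					fieldRefEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldRefEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}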
May 24 12:13:18.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:13:18.207: INFO: namespace: e2e-tests-downward-api-jr22s, resource: bindings, ignored listing per whitelist May 24 12:13:18.248: INFO: namespace e2e-tests-downward-api-jr22s deletion completed in 6.093691923s • [SLOW TEST:10.345 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:13:18.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 24 12:13:19.146: INFO: Pod name wrapped-volume-race-f44420ae-9db7-11ea-9618-0242ac110016: Found 0 pods out of 5 May 24 12:13:24.158: INFO: Pod name wrapped-volume-race-f44420ae-9db7-11ea-9618-0242ac110016: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f44420ae-9db7-11ea-9618-0242ac110016 in namespace e2e-tests-emptydir-wrapper-mvg82, will wait for the garbage collector to delete the pods May 24 12:15:26.247: INFO: Deleting ReplicationController wrapped-volume-race-f44420ae-9db7-11ea-9618-0242ac110016 took: 7.368617ms May 24 12:15:26.348: INFO: Terminating ReplicationController wrapped-volume-race-f44420ae-9db7-11ea-9618-0242ac110016 pods took: 100.300222ms STEP: Creating RC which spawns configmap-volume pods May 24 12:16:11.500: INFO: Pod name wrapped-volume-race-5affe7af-9db8-11ea-9618-0242ac110016: Found 0 pods out of 5 May 24 12:16:16.509: INFO: Pod name wrapped-volume-race-5affe7af-9db8-11ea-9618-0242ac110016: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5affe7af-9db8-11ea-9618-0242ac110016 in namespace e2e-tests-emptydir-wrapper-mvg82, will wait for the garbage collector to delete the pods May 24 12:18:10.592: INFO: Deleting ReplicationController wrapped-volume-race-5affe7af-9db8-11ea-9618-0242ac110016 took: 6.389465ms May 24 12:18:10.693: INFO: Terminating ReplicationController wrapped-volume-race-5affe7af-9db8-11ea-9618-0242ac110016 pods took: 100.316614ms STEP: Creating RC which spawns configmap-volume pods May 24 12:18:52.382: INFO: Pod name wrapped-volume-race-badee702-9db8-11ea-9618-0242ac110016: Found 0 pods out of 5 May 24 12:18:57.390: INFO: Pod name wrapped-volume-race-badee702-9db8-11ea-9618-0242ac110016: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-badee702-9db8-11ea-9618-0242ac110016 in namespace e2e-tests-emptydir-wrapper-mvg82, will wait for the garbage collector to delete the pods May 24 12:21:33.472: INFO: Deleting ReplicationController wrapped-volume-race-badee702-9db8-11ea-9618-0242ac110016 took: 7.477211ms May 24 12:21:33.672: INFO: Terminating ReplicationController wrapped-volume-race-badee702-9db8-11ea-9618-0242ac110016 pods took: 200.353483ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:22:11.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-mvg82" for this suite. May 24 12:22:19.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:22:20.042: INFO: namespace: e2e-tests-emptydir-wrapper-mvg82, resource: bindings, ignored listing per whitelist May 24 12:22:20.074: INFO: namespace e2e-tests-emptydir-wrapper-mvg82 deletion completed in 8.094761601s • [SLOW TEST:541.826 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:22:20.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 24 12:22:20.204: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:22:27.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-srdnw" for this suite. 
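The init container case above submits a RestartAlways pod whose init containers must run to completion, in order, before the regular container starts. A minimal sketch of that pod shape; images and commands are illustrative.

// Sketch: a RestartAlways pod with two init containers ahead of the main one.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}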
May 24 12:22:51.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:22:51.887: INFO: namespace: e2e-tests-init-container-srdnw, resource: bindings, ignored listing per whitelist May 24 12:22:51.910: INFO: namespace e2e-tests-init-container-srdnw deletion completed in 24.130993113s • [SLOW TEST:31.835 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:22:51.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 12:22:56.131: INFO: Waiting up to 5m0s for pod "client-envvars-4c330d11-9db9-11ea-9618-0242ac110016" in namespace "e2e-tests-pods-wrg5q" to be "success or failure" May 24 12:22:56.172: INFO: Pod "client-envvars-4c330d11-9db9-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 41.041184ms May 24 12:22:58.178: INFO: Pod "client-envvars-4c330d11-9db9-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046760002s May 24 12:23:00.182: INFO: Pod "client-envvars-4c330d11-9db9-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051273282s STEP: Saw pod success May 24 12:23:00.182: INFO: Pod "client-envvars-4c330d11-9db9-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:23:00.185: INFO: Trying to get logs from node hunter-worker pod client-envvars-4c330d11-9db9-11ea-9618-0242ac110016 container env3cont: STEP: delete the pod May 24 12:23:00.227: INFO: Waiting for pod client-envvars-4c330d11-9db9-11ea-9618-0242ac110016 to disappear May 24 12:23:00.239: INFO: Pod client-envvars-4c330d11-9db9-11ea-9618-0242ac110016 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:23:00.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wrg5q" for this suite. 
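The service env-var case above checks, from inside the pod, that the kubelet injected variables for services created before the pod started, alongside the always-present KUBERNETES_* ones. A tiny in-container sketch; the fooservice name is an assumption for illustration.

// Sketch: what a container sees for a service named "fooservice" created
// before the pod, plus the built-in kubernetes service variables.
package main

import (
	"fmt"
	"os"
)

func main() {
	for _, key := range []string{
		"KUBERNETES_SERVICE_HOST",
		"KUBERNETES_SERVICE_PORT",
		"FOOSERVICE_SERVICE_HOST",
		"FOOSERVICE_SERVICE_PORT",
	} {
		fmt.Printf("%s=%s\n", key, os.Getenv(key))
	}
}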
May 24 12:23:38.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:23:38.366: INFO: namespace: e2e-tests-pods-wrg5q, resource: bindings, ignored listing per whitelist May 24 12:23:38.382: INFO: namespace e2e-tests-pods-wrg5q deletion completed in 38.139047631s • [SLOW TEST:46.471 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:23:38.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-65772a49-9db9-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 12:23:38.516: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6577a43f-9db9-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-2tddp" to be "success or failure" May 24 12:23:38.544: INFO: Pod "pod-projected-configmaps-6577a43f-9db9-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 28.269707ms May 24 12:23:40.548: INFO: Pod "pod-projected-configmaps-6577a43f-9db9-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032451052s May 24 12:23:42.553: INFO: Pod "pod-projected-configmaps-6577a43f-9db9-11ea-9618-0242ac110016": Phase="Running", Reason="", readiness=true. Elapsed: 4.037264129s May 24 12:23:44.557: INFO: Pod "pod-projected-configmaps-6577a43f-9db9-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041404124s STEP: Saw pod success May 24 12:23:44.557: INFO: Pod "pod-projected-configmaps-6577a43f-9db9-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:23:44.560: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-6577a43f-9db9-11ea-9618-0242ac110016 container projected-configmap-volume-test: STEP: delete the pod May 24 12:23:44.600: INFO: Waiting for pod pod-projected-configmaps-6577a43f-9db9-11ea-9618-0242ac110016 to disappear May 24 12:23:44.633: INFO: Pod pod-projected-configmaps-6577a43f-9db9-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:23:44.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2tddp" for this suite. 
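The non-root variant above is the same projected-ConfigMap consumption as earlier, but the reading container runs with an explicit non-zero UID, so the projected files must still be readable to it. A short sketch; the UID and names are illustrative.

// Sketch: consuming a projected ConfigMap volume as a non-root user.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-nonroot"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "projected-configmap-volume-test",
				Image:           "busybox",
				Command:         []string{"sh", "-c", "id && cat /etc/projected/data-1"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "projected-vol", MountPath: "/etc/projected"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
							},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}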
May 24 12:23:50.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:23:50.670: INFO: namespace: e2e-tests-projected-2tddp, resource: bindings, ignored listing per whitelist May 24 12:23:50.759: INFO: namespace e2e-tests-projected-2tddp deletion completed in 6.122037438s • [SLOW TEST:12.377 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:23:50.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-4fb4d May 24 12:23:54.958: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-4fb4d STEP: checking the pod's current state and verifying that restartCount is present May 24 12:23:54.961: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:27:55.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-4fb4d" for this suite. 
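The liveness-exec case above is the inverse of the earlier HTTP-probe one: the container creates /tmp/health and keeps it around, so the exec probe "cat /tmp/health" keeps succeeding and the restart count stays at 0 for the whole observation window. A sketch of that fixture shape; the image and sleep duration are illustrative, and the handler is again assigned via the promoted field to stay agnostic to the Handler/ProbeHandler rename.

// Sketch: an exec liveness probe that never fails, so no restarts occur.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	probe := &corev1.Probe{InitialDelaySeconds: 15, FailureThreshold: 1}
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "liveness",
				Image:         "busybox",
				Command:       []string{"sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: probe,
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}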
May 24 12:28:01.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:28:01.864: INFO: namespace: e2e-tests-container-probe-4fb4d, resource: bindings, ignored listing per whitelist May 24 12:28:01.906: INFO: namespace e2e-tests-container-probe-4fb4d deletion completed in 6.092470009s • [SLOW TEST:251.147 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:28:01.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 24 12:28:02.066: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lqbtn,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqbtn/configmaps/e2e-watch-test-watch-closed,UID:02878f64-9dba-11ea-99e8-0242ac110002,ResourceVersion:12274773,Generation:0,CreationTimestamp:2020-05-24 12:28:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 24 12:28:02.066: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lqbtn,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqbtn/configmaps/e2e-watch-test-watch-closed,UID:02878f64-9dba-11ea-99e8-0242ac110002,ResourceVersion:12274774,Generation:0,CreationTimestamp:2020-05-24 12:28:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 24 12:28:02.100: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lqbtn,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqbtn/configmaps/e2e-watch-test-watch-closed,UID:02878f64-9dba-11ea-99e8-0242ac110002,ResourceVersion:12274775,Generation:0,CreationTimestamp:2020-05-24 12:28:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 24 12:28:02.100: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lqbtn,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqbtn/configmaps/e2e-watch-test-watch-closed,UID:02878f64-9dba-11ea-99e8-0242ac110002,ResourceVersion:12274776,Generation:0,CreationTimestamp:2020-05-24 12:28:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:28:02.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-lqbtn" for this suite. May 24 12:28:08.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:28:08.210: INFO: namespace: e2e-tests-watch-lqbtn, resource: bindings, ignored listing per whitelist May 24 12:28:08.219: INFO: namespace e2e-tests-watch-lqbtn deletion completed in 6.09094464s • [SLOW TEST:6.313 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:28:08.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-065318f7-9dba-11ea-9618-0242ac110016 STEP: Creating secret with name s-test-opt-upd-0653195f-9dba-11ea-9618-0242ac110016 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-065318f7-9dba-11ea-9618-0242ac110016 STEP: Updating secret s-test-opt-upd-0653195f-9dba-11ea-9618-0242ac110016 STEP: Creating secret with name 
s-test-opt-create-06531986-9dba-11ea-9618-0242ac110016 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:29:32.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d4g62" for this suite. May 24 12:29:54.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:29:54.943: INFO: namespace: e2e-tests-projected-d4g62, resource: bindings, ignored listing per whitelist May 24 12:29:54.990: INFO: namespace e2e-tests-projected-d4g62 deletion completed in 22.08004728s • [SLOW TEST:106.770 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:29:54.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 24 12:29:55.098: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g4gm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4gm6/configmaps/e2e-watch-test-configmap-a,UID:45ed0616-9dba-11ea-99e8-0242ac110002,ResourceVersion:12275046,Generation:0,CreationTimestamp:2020-05-24 12:29:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 24 12:29:55.098: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g4gm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4gm6/configmaps/e2e-watch-test-configmap-a,UID:45ed0616-9dba-11ea-99e8-0242ac110002,ResourceVersion:12275046,Generation:0,CreationTimestamp:2020-05-24 12:29:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification 
May 24 12:30:05.106: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g4gm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4gm6/configmaps/e2e-watch-test-configmap-a,UID:45ed0616-9dba-11ea-99e8-0242ac110002,ResourceVersion:12275066,Generation:0,CreationTimestamp:2020-05-24 12:29:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 24 12:30:05.106: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g4gm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4gm6/configmaps/e2e-watch-test-configmap-a,UID:45ed0616-9dba-11ea-99e8-0242ac110002,ResourceVersion:12275066,Generation:0,CreationTimestamp:2020-05-24 12:29:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 24 12:30:15.115: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g4gm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4gm6/configmaps/e2e-watch-test-configmap-a,UID:45ed0616-9dba-11ea-99e8-0242ac110002,ResourceVersion:12275086,Generation:0,CreationTimestamp:2020-05-24 12:29:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 24 12:30:15.116: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g4gm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4gm6/configmaps/e2e-watch-test-configmap-a,UID:45ed0616-9dba-11ea-99e8-0242ac110002,ResourceVersion:12275086,Generation:0,CreationTimestamp:2020-05-24 12:29:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 24 12:30:25.122: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g4gm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4gm6/configmaps/e2e-watch-test-configmap-a,UID:45ed0616-9dba-11ea-99e8-0242ac110002,ResourceVersion:12275106,Generation:0,CreationTimestamp:2020-05-24 12:29:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 24 12:30:25.123: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-g4gm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4gm6/configmaps/e2e-watch-test-configmap-a,UID:45ed0616-9dba-11ea-99e8-0242ac110002,ResourceVersion:12275106,Generation:0,CreationTimestamp:2020-05-24 12:29:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 24 12:30:35.130: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-g4gm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4gm6/configmaps/e2e-watch-test-configmap-b,UID:5dca85cf-9dba-11ea-99e8-0242ac110002,ResourceVersion:12275126,Generation:0,CreationTimestamp:2020-05-24 12:30:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 24 12:30:35.130: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-g4gm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4gm6/configmaps/e2e-watch-test-configmap-b,UID:5dca85cf-9dba-11ea-99e8-0242ac110002,ResourceVersion:12275126,Generation:0,CreationTimestamp:2020-05-24 12:30:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 24 12:30:45.138: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-g4gm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4gm6/configmaps/e2e-watch-test-configmap-b,UID:5dca85cf-9dba-11ea-99e8-0242ac110002,ResourceVersion:12275146,Generation:0,CreationTimestamp:2020-05-24 12:30:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 24 12:30:45.138: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-g4gm6,SelfLink:/api/v1/namespaces/e2e-tests-watch-g4gm6/configmaps/e2e-watch-test-configmap-b,UID:5dca85cf-9dba-11ea-99e8-0242ac110002,ResourceVersion:12275146,Generation:0,CreationTimestamp:2020-05-24 12:30:35 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:30:55.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-g4gm6" for this suite. May 24 12:31:01.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:31:01.197: INFO: namespace: e2e-tests-watch-g4gm6, resource: bindings, ignored listing per whitelist May 24 12:31:01.264: INFO: namespace e2e-tests-watch-g4gm6 deletion completed in 6.120804716s • [SLOW TEST:66.274 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:31:01.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 24 12:31:01.374: INFO: Waiting up to 5m0s for pod "pod-6d6d8ee8-9dba-11ea-9618-0242ac110016" in namespace "e2e-tests-emptydir-fkgfz" to be "success or failure" May 24 12:31:01.378: INFO: Pod "pod-6d6d8ee8-9dba-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.705702ms May 24 12:31:03.394: INFO: Pod "pod-6d6d8ee8-9dba-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02023272s May 24 12:31:05.399: INFO: Pod "pod-6d6d8ee8-9dba-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024782864s STEP: Saw pod success May 24 12:31:05.399: INFO: Pod "pod-6d6d8ee8-9dba-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:31:05.402: INFO: Trying to get logs from node hunter-worker pod pod-6d6d8ee8-9dba-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 12:31:05.444: INFO: Waiting for pod pod-6d6d8ee8-9dba-11ea-9618-0242ac110016 to disappear May 24 12:31:05.456: INFO: Pod pod-6d6d8ee8-9dba-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:31:05.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-fkgfz" for this suite. 
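For context on the EmptyDir case that just finished (should support (root,0666,default)), the test boils down to creating a short-lived pod that mounts an emptyDir volume on the default medium, checking the mounted file's permissions, and reading the container log. The real test uses the mounttest image with mode-checking arguments; the sketch below only shows the volume wiring, with an illustrative busybox command:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirProbePod returns a pod that mounts an emptyDir volume (default
// medium) at /test-volume and exits after touching a file in it.
func emptyDirProbePod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource means the default (node disk) medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
}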
May 24 12:31:11.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:31:11.504: INFO: namespace: e2e-tests-emptydir-fkgfz, resource: bindings, ignored listing per whitelist May 24 12:31:11.588: INFO: namespace e2e-tests-emptydir-fkgfz deletion completed in 6.129019487s • [SLOW TEST:10.324 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:31:11.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 12:31:11.758: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 24 12:31:11.788: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:11.790: INFO: Number of nodes with available pods: 0 May 24 12:31:11.790: INFO: Node hunter-worker is running more than one daemon pod May 24 12:31:12.796: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:12.800: INFO: Number of nodes with available pods: 0 May 24 12:31:12.800: INFO: Node hunter-worker is running more than one daemon pod May 24 12:31:14.264: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:14.268: INFO: Number of nodes with available pods: 0 May 24 12:31:14.268: INFO: Node hunter-worker is running more than one daemon pod May 24 12:31:14.947: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:14.950: INFO: Number of nodes with available pods: 0 May 24 12:31:14.950: INFO: Node hunter-worker is running more than one daemon pod May 24 12:31:15.795: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:15.799: INFO: Number of nodes with available pods: 1 May 24 12:31:15.799: INFO: Node hunter-worker2 is running more than one daemon pod May 24 12:31:16.796: INFO: DaemonSet pods can't tolerate node 
hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:16.801: INFO: Number of nodes with available pods: 2 May 24 12:31:16.801: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 24 12:31:16.878: INFO: Wrong image for pod: daemon-set-7hfts. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:16.878: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:16.890: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:17.896: INFO: Wrong image for pod: daemon-set-7hfts. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:17.896: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:17.900: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:18.894: INFO: Wrong image for pod: daemon-set-7hfts. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:18.894: INFO: Pod daemon-set-7hfts is not available May 24 12:31:18.894: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:18.899: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:19.894: INFO: Pod daemon-set-4wlcr is not available May 24 12:31:19.895: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:19.906: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:20.895: INFO: Pod daemon-set-4wlcr is not available May 24 12:31:20.895: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:20.899: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:21.896: INFO: Pod daemon-set-4wlcr is not available May 24 12:31:21.896: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:21.901: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:22.910: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 24 12:31:22.914: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:23.894: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:23.898: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:24.895: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:24.895: INFO: Pod daemon-set-tfh88 is not available May 24 12:31:24.899: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:25.895: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:25.895: INFO: Pod daemon-set-tfh88 is not available May 24 12:31:25.900: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:26.895: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:26.895: INFO: Pod daemon-set-tfh88 is not available May 24 12:31:26.899: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:27.895: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:27.895: INFO: Pod daemon-set-tfh88 is not available May 24 12:31:27.901: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:28.895: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:28.895: INFO: Pod daemon-set-tfh88 is not available May 24 12:31:28.899: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:29.895: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 24 12:31:29.895: INFO: Pod daemon-set-tfh88 is not available May 24 12:31:29.899: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:30.895: INFO: Wrong image for pod: daemon-set-tfh88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 24 12:31:30.895: INFO: Pod daemon-set-tfh88 is not available May 24 12:31:30.898: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:31.898: INFO: Pod daemon-set-t9xc5 is not available May 24 12:31:31.901: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 24 12:31:31.905: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:31.908: INFO: Number of nodes with available pods: 1 May 24 12:31:31.908: INFO: Node hunter-worker2 is running more than one daemon pod May 24 12:31:32.913: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:32.915: INFO: Number of nodes with available pods: 1 May 24 12:31:32.915: INFO: Node hunter-worker2 is running more than one daemon pod May 24 12:31:33.914: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:33.917: INFO: Number of nodes with available pods: 1 May 24 12:31:33.917: INFO: Node hunter-worker2 is running more than one daemon pod May 24 12:31:34.913: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 12:31:34.915: INFO: Number of nodes with available pods: 2 May 24 12:31:34.915: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tlhdj, will wait for the garbage collector to delete the pods May 24 12:31:34.988: INFO: Deleting DaemonSet.extensions daemon-set took: 6.841073ms May 24 12:31:35.088: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.221331ms May 24 12:31:41.792: INFO: Number of nodes with available pods: 0 May 24 12:31:41.792: INFO: Number of running nodes: 0, number of available pods: 0 May 24 12:31:41.795: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tlhdj/daemonsets","resourceVersion":"12275362"},"items":null} May 24 12:31:41.797: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tlhdj/pods","resourceVersion":"12275362"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:31:41.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-tlhdj" for this suite. 
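The DaemonSet rolling-update case above creates a simple daemon set, waits for one pod per schedulable worker, then swaps the pod template image (from docker.io/library/nginx:1.14-alpine to the redis test image) and watches the controller replace pods node by node, which is what the default RollingUpdate strategy does. A rough client-go sketch of the update step, with invented names and recent (context-taking) signatures:

package sketches

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rollDaemonSetImage switches the DaemonSet's first container image and lets
// the RollingUpdate strategy (the default) replace pods one node at a time.
func rollDaemonSetImage(cs kubernetes.Interface, ns, name, newImage string) error {
	ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ds.Spec.Template.Spec.Containers[0].Image = newImage
	_, err = cs.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{})
	return err
}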
May 24 12:31:47.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:31:47.865: INFO: namespace: e2e-tests-daemonsets-tlhdj, resource: bindings, ignored listing per whitelist May 24 12:31:47.930: INFO: namespace e2e-tests-daemonsets-tlhdj deletion completed in 6.096795357s • [SLOW TEST:36.341 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:31:47.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 24 12:31:48.112: INFO: Waiting up to 5m0s for pod "pod-8947c89f-9dba-11ea-9618-0242ac110016" in namespace "e2e-tests-emptydir-f8mlf" to be "success or failure" May 24 12:31:48.130: INFO: Pod "pod-8947c89f-9dba-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 18.012107ms May 24 12:31:50.134: INFO: Pod "pod-8947c89f-9dba-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02205596s May 24 12:31:52.138: INFO: Pod "pod-8947c89f-9dba-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026503744s STEP: Saw pod success May 24 12:31:52.138: INFO: Pod "pod-8947c89f-9dba-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:31:52.142: INFO: Trying to get logs from node hunter-worker pod pod-8947c89f-9dba-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 12:31:52.160: INFO: Waiting for pod pod-8947c89f-9dba-11ea-9618-0242ac110016 to disappear May 24 12:31:52.170: INFO: Pod pod-8947c89f-9dba-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:31:52.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-f8mlf" for this suite. 
May 24 12:31:58.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:31:58.229: INFO: namespace: e2e-tests-emptydir-f8mlf, resource: bindings, ignored listing per whitelist May 24 12:31:58.263: INFO: namespace e2e-tests-emptydir-f8mlf deletion completed in 6.089555649s • [SLOW TEST:10.333 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:31:58.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 24 12:32:02.901: INFO: Successfully updated pod "annotationupdate8f678184-9dba-11ea-9618-0242ac110016" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:32:04.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kj2qv" for this suite. 
May 24 12:32:28.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:32:28.964: INFO: namespace: e2e-tests-projected-kj2qv, resource: bindings, ignored listing per whitelist May 24 12:32:29.025: INFO: namespace e2e-tests-projected-kj2qv deletion completed in 24.087961833s • [SLOW TEST:30.762 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:32:29.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-v6qrc STEP: creating a selector STEP: Creating the service pods in kubernetes May 24 12:32:29.184: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 24 12:32:57.360: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.147:8080/dial?request=hostName&protocol=udp&host=10.244.2.131&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-v6qrc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 12:32:57.361: INFO: >>> kubeConfig: /root/.kube/config I0524 12:32:57.392092 6 log.go:172] (0xc0006f34a0) (0xc0022b6500) Create stream I0524 12:32:57.392116 6 log.go:172] (0xc0006f34a0) (0xc0022b6500) Stream added, broadcasting: 1 I0524 12:32:57.394521 6 log.go:172] (0xc0006f34a0) Reply frame received for 1 I0524 12:32:57.394562 6 log.go:172] (0xc0006f34a0) (0xc00286a000) Create stream I0524 12:32:57.394577 6 log.go:172] (0xc0006f34a0) (0xc00286a000) Stream added, broadcasting: 3 I0524 12:32:57.395464 6 log.go:172] (0xc0006f34a0) Reply frame received for 3 I0524 12:32:57.395513 6 log.go:172] (0xc0006f34a0) (0xc0022b65a0) Create stream I0524 12:32:57.395529 6 log.go:172] (0xc0006f34a0) (0xc0022b65a0) Stream added, broadcasting: 5 I0524 12:32:57.396409 6 log.go:172] (0xc0006f34a0) Reply frame received for 5 I0524 12:32:57.525632 6 log.go:172] (0xc0006f34a0) Data frame received for 3 I0524 12:32:57.525662 6 log.go:172] (0xc00286a000) (3) Data frame handling I0524 12:32:57.525684 6 log.go:172] (0xc00286a000) (3) Data frame sent I0524 12:32:57.526456 6 log.go:172] (0xc0006f34a0) Data frame received for 5 I0524 12:32:57.526473 6 log.go:172] (0xc0022b65a0) (5) Data frame handling I0524 12:32:57.526493 6 log.go:172] (0xc0006f34a0) Data frame received for 3 I0524 12:32:57.526502 6 log.go:172] (0xc00286a000) (3) Data frame handling I0524 12:32:57.528094 6 log.go:172] 
(0xc0006f34a0) Data frame received for 1 I0524 12:32:57.528120 6 log.go:172] (0xc0022b6500) (1) Data frame handling I0524 12:32:57.528148 6 log.go:172] (0xc0022b6500) (1) Data frame sent I0524 12:32:57.528175 6 log.go:172] (0xc0006f34a0) (0xc0022b6500) Stream removed, broadcasting: 1 I0524 12:32:57.528195 6 log.go:172] (0xc0006f34a0) Go away received I0524 12:32:57.528313 6 log.go:172] (0xc0006f34a0) (0xc0022b6500) Stream removed, broadcasting: 1 I0524 12:32:57.528332 6 log.go:172] (0xc0006f34a0) (0xc00286a000) Stream removed, broadcasting: 3 I0524 12:32:57.528348 6 log.go:172] (0xc0006f34a0) (0xc0022b65a0) Stream removed, broadcasting: 5 May 24 12:32:57.528: INFO: Waiting for endpoints: map[] May 24 12:32:57.531: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.147:8080/dial?request=hostName&protocol=udp&host=10.244.1.146&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-v6qrc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 12:32:57.531: INFO: >>> kubeConfig: /root/.kube/config I0524 12:32:57.564973 6 log.go:172] (0xc0000ebc30) (0xc00286a460) Create stream I0524 12:32:57.564996 6 log.go:172] (0xc0000ebc30) (0xc00286a460) Stream added, broadcasting: 1 I0524 12:32:57.567441 6 log.go:172] (0xc0000ebc30) Reply frame received for 1 I0524 12:32:57.567471 6 log.go:172] (0xc0000ebc30) (0xc001b520a0) Create stream I0524 12:32:57.567481 6 log.go:172] (0xc0000ebc30) (0xc001b520a0) Stream added, broadcasting: 3 I0524 12:32:57.568383 6 log.go:172] (0xc0000ebc30) Reply frame received for 3 I0524 12:32:57.568413 6 log.go:172] (0xc0000ebc30) (0xc0022b6640) Create stream I0524 12:32:57.568423 6 log.go:172] (0xc0000ebc30) (0xc0022b6640) Stream added, broadcasting: 5 I0524 12:32:57.569365 6 log.go:172] (0xc0000ebc30) Reply frame received for 5 I0524 12:32:57.648922 6 log.go:172] (0xc0000ebc30) Data frame received for 3 I0524 12:32:57.648951 6 log.go:172] (0xc001b520a0) (3) Data frame handling I0524 12:32:57.648966 6 log.go:172] (0xc001b520a0) (3) Data frame sent I0524 12:32:57.649829 6 log.go:172] (0xc0000ebc30) Data frame received for 3 I0524 12:32:57.649854 6 log.go:172] (0xc001b520a0) (3) Data frame handling I0524 12:32:57.649904 6 log.go:172] (0xc0000ebc30) Data frame received for 5 I0524 12:32:57.649980 6 log.go:172] (0xc0022b6640) (5) Data frame handling I0524 12:32:57.651432 6 log.go:172] (0xc0000ebc30) Data frame received for 1 I0524 12:32:57.651453 6 log.go:172] (0xc00286a460) (1) Data frame handling I0524 12:32:57.651467 6 log.go:172] (0xc00286a460) (1) Data frame sent I0524 12:32:57.651475 6 log.go:172] (0xc0000ebc30) (0xc00286a460) Stream removed, broadcasting: 1 I0524 12:32:57.651484 6 log.go:172] (0xc0000ebc30) Go away received I0524 12:32:57.651649 6 log.go:172] (0xc0000ebc30) (0xc00286a460) Stream removed, broadcasting: 1 I0524 12:32:57.651672 6 log.go:172] (0xc0000ebc30) (0xc001b520a0) Stream removed, broadcasting: 3 I0524 12:32:57.651681 6 log.go:172] (0xc0000ebc30) (0xc0022b6640) Stream removed, broadcasting: 5 May 24 12:32:57.651: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:32:57.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-v6qrc" for this suite. 
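The intra-pod UDP check above execs curl inside a host-test-container pod against the /dial endpoint of a netexec-style test pod, asking it to relay a hostName request over UDP to each endpoint and comparing the answers it gets back. A loose Go equivalent of that probe is sketched below; the URL layout mirrors the one in the log, but the JSON the dial handler returns can differ between image versions, so the sketch only returns the raw body:

package sketches

import (
	"fmt"
	"io"
	"net/http"
)

// dialViaNetexec asks a netexec-style pod at proxyIP to send a UDP "hostName"
// request to targetIP:targetPort and returns whatever it reports back.
func dialViaNetexec(proxyIP, targetIP string, targetPort int) (string, error) {
	url := fmt.Sprintf(
		"http://%s:8080/dial?request=hostName&protocol=udp&host=%s&port=%d&tries=1",
		proxyIP, targetIP, targetPort)
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}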
May 24 12:33:21.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:33:21.714: INFO: namespace: e2e-tests-pod-network-test-v6qrc, resource: bindings, ignored listing per whitelist May 24 12:33:21.754: INFO: namespace e2e-tests-pod-network-test-v6qrc deletion completed in 24.098300557s • [SLOW TEST:52.728 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:33:21.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-c129dbd7-9dba-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 12:33:21.894: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c12ca445-9dba-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-82w6s" to be "success or failure" May 24 12:33:21.898: INFO: Pod "pod-projected-configmaps-c12ca445-9dba-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.747612ms May 24 12:33:23.931: INFO: Pod "pod-projected-configmaps-c12ca445-9dba-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036369016s May 24 12:33:25.935: INFO: Pod "pod-projected-configmaps-c12ca445-9dba-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04097121s STEP: Saw pod success May 24 12:33:25.935: INFO: Pod "pod-projected-configmaps-c12ca445-9dba-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:33:25.939: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-c12ca445-9dba-11ea-9618-0242ac110016 container projected-configmap-volume-test: STEP: delete the pod May 24 12:33:26.015: INFO: Waiting for pod pod-projected-configmaps-c12ca445-9dba-11ea-9618-0242ac110016 to disappear May 24 12:33:26.024: INFO: Pod pod-projected-configmaps-c12ca445-9dba-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:33:26.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-82w6s" for this suite. 
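The projected ConfigMap case above mounts a ConfigMap through a projected volume, remaps one key to a different path, and runs the container as a non-root UID before reading the file back. The pod spec for that pattern looks roughly like the sketch below; the UID, key and path values are illustrative rather than the generated ones in the log:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod mounts key "data-1" of the named ConfigMap at
// path/to/data-2 inside a projected volume, and runs as a non-root user.
func projectedConfigMapPod(podName, configMapName string) *corev1.Pod {
	nonRoot := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: podName},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "projected-configmap-volume-test",
				Image:           "busybox",
				Command:         []string{"cat", "/etc/projected/path/to/data-2"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRoot},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
		},
	}
}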
May 24 12:33:32.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:33:32.086: INFO: namespace: e2e-tests-projected-82w6s, resource: bindings, ignored listing per whitelist May 24 12:33:32.118: INFO: namespace e2e-tests-projected-82w6s deletion completed in 6.091068388s • [SLOW TEST:10.364 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:33:32.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-9l8f STEP: Creating a pod to test atomic-volume-subpath May 24 12:33:32.272: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9l8f" in namespace "e2e-tests-subpath-4l859" to be "success or failure" May 24 12:33:32.290: INFO: Pod "pod-subpath-test-downwardapi-9l8f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.335048ms May 24 12:33:34.439: INFO: Pod "pod-subpath-test-downwardapi-9l8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16690142s May 24 12:33:36.505: INFO: Pod "pod-subpath-test-downwardapi-9l8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232736233s May 24 12:33:38.509: INFO: Pod "pod-subpath-test-downwardapi-9l8f": Phase="Running", Reason="", readiness=true. Elapsed: 6.237150671s May 24 12:33:40.514: INFO: Pod "pod-subpath-test-downwardapi-9l8f": Phase="Running", Reason="", readiness=false. Elapsed: 8.241731011s May 24 12:33:42.518: INFO: Pod "pod-subpath-test-downwardapi-9l8f": Phase="Running", Reason="", readiness=false. Elapsed: 10.246123055s May 24 12:33:44.523: INFO: Pod "pod-subpath-test-downwardapi-9l8f": Phase="Running", Reason="", readiness=false. Elapsed: 12.25080949s May 24 12:33:46.527: INFO: Pod "pod-subpath-test-downwardapi-9l8f": Phase="Running", Reason="", readiness=false. Elapsed: 14.255077744s May 24 12:33:48.532: INFO: Pod "pod-subpath-test-downwardapi-9l8f": Phase="Running", Reason="", readiness=false. Elapsed: 16.259271405s May 24 12:33:50.536: INFO: Pod "pod-subpath-test-downwardapi-9l8f": Phase="Running", Reason="", readiness=false. Elapsed: 18.263792409s May 24 12:33:52.540: INFO: Pod "pod-subpath-test-downwardapi-9l8f": Phase="Running", Reason="", readiness=false. Elapsed: 20.267672781s May 24 12:33:54.545: INFO: Pod "pod-subpath-test-downwardapi-9l8f": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.272571083s May 24 12:33:56.549: INFO: Pod "pod-subpath-test-downwardapi-9l8f": Phase="Running", Reason="", readiness=false. Elapsed: 24.277000885s May 24 12:33:58.553: INFO: Pod "pod-subpath-test-downwardapi-9l8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.280941978s STEP: Saw pod success May 24 12:33:58.553: INFO: Pod "pod-subpath-test-downwardapi-9l8f" satisfied condition "success or failure" May 24 12:33:58.557: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-9l8f container test-container-subpath-downwardapi-9l8f: STEP: delete the pod May 24 12:33:58.595: INFO: Waiting for pod pod-subpath-test-downwardapi-9l8f to disappear May 24 12:33:58.642: INFO: Pod pod-subpath-test-downwardapi-9l8f no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-9l8f May 24 12:33:58.642: INFO: Deleting pod "pod-subpath-test-downwardapi-9l8f" in namespace "e2e-tests-subpath-4l859" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:33:58.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-4l859" for this suite. May 24 12:34:04.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:34:04.736: INFO: namespace: e2e-tests-subpath-4l859, resource: bindings, ignored listing per whitelist May 24 12:34:04.762: INFO: namespace e2e-tests-subpath-4l859 deletion completed in 6.113756396s • [SLOW TEST:32.644 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:34:04.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 24 12:34:12.916: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:12.938: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:14.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:14.941: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:16.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:16.942: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:18.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:18.942: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:20.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:20.942: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:22.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:22.942: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:24.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:24.954: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:26.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:26.948: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:28.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:28.943: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:30.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:30.942: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:32.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:32.943: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:34.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:34.942: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:36.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:36.943: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:38.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:38.943: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:40.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:40.942: INFO: Pod pod-with-poststart-exec-hook still exists May 24 12:34:42.938: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 12:34:42.943: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:34:42.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-lr5rw" for this suite. 
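The lifecycle-hook case above attaches a postStart exec hook to a pod, checks that the hook fired (STEP: check poststart hook, against the handler pod created in BeforeEach), then deletes the pod and polls (the long run of "still exists" lines) until it is gone. The hook itself is a Lifecycle stanza on the container, roughly as sketched here; the command is illustrative, and on client-go releases before v0.23 the handler type is named corev1.Handler rather than corev1.LifecycleHandler:

package sketches

import corev1 "k8s.io/api/core/v1"

// withPostStartExecHook adds a postStart exec hook to the pod's first
// container; the kubelet runs the command right after the container starts.
func withPostStartExecHook(pod *corev1.Pod, command []string) *corev1.Pod {
	pod.Spec.Containers[0].Lifecycle = &corev1.Lifecycle{
		// On client-go < v0.23 this type is corev1.Handler.
		PostStart: &corev1.LifecycleHandler{
			Exec: &corev1.ExecAction{Command: command},
		},
	}
	return pod
}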
May 24 12:35:04.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:35:04.994: INFO: namespace: e2e-tests-container-lifecycle-hook-lr5rw, resource: bindings, ignored listing per whitelist May 24 12:35:05.052: INFO: namespace e2e-tests-container-lifecycle-hook-lr5rw deletion completed in 22.105553138s • [SLOW TEST:60.290 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:35:05.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 24 12:35:09.725: INFO: Successfully updated pod "pod-update-febed6c9-9dba-11ea-9618-0242ac110016" STEP: verifying the updated pod is in kubernetes May 24 12:35:09.748: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:35:09.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vwk4v" for this suite. 
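The Pods "should be updated" case creates a pod, mutates it in place (the framework logs "Successfully updated pod ..."), and checks that a fresh GET reflects the change. One common way to make that kind of in-place change is a strategic-merge patch, sketched below with an invented label key and recent client-go signatures:

package sketches

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// bumpPodLabel patches a single label onto an existing pod, which is the kind
// of mutable, in-place update the conformance test exercises.
func bumpPodLabel(cs kubernetes.Interface, ns, name, key, value string) error {
	patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{%q:%q}}}`, key, value))
	_, err := cs.CoreV1().Pods(ns).Patch(
		context.TODO(), name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}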
May 24 12:35:31.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:35:31.866: INFO: namespace: e2e-tests-pods-vwk4v, resource: bindings, ignored listing per whitelist May 24 12:35:31.872: INFO: namespace e2e-tests-pods-vwk4v deletion completed in 22.120442437s • [SLOW TEST:26.819 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:35:31.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 24 12:35:32.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ebceff2-9dbb-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-v8whf" to be "success or failure" May 24 12:35:32.009: INFO: Pod "downwardapi-volume-0ebceff2-9dbb-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.005878ms May 24 12:35:34.013: INFO: Pod "downwardapi-volume-0ebceff2-9dbb-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007183789s May 24 12:35:36.018: INFO: Pod "downwardapi-volume-0ebceff2-9dbb-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011766405s STEP: Saw pod success May 24 12:35:36.018: INFO: Pod "downwardapi-volume-0ebceff2-9dbb-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:35:36.021: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0ebceff2-9dbb-11ea-9618-0242ac110016 container client-container: STEP: delete the pod May 24 12:35:36.040: INFO: Waiting for pod downwardapi-volume-0ebceff2-9dbb-11ea-9618-0242ac110016 to disappear May 24 12:35:36.141: INFO: Pod downwardapi-volume-0ebceff2-9dbb-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:35:36.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-v8whf" for this suite. 
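The projected downwardAPI case above has the container read its own CPU request out of a file served from a projected downwardAPI volume. The volume wiring follows the pattern sketched here; the file path and divisor are typical choices rather than values taken from the test source:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuRequestProjection exposes the named container's requests.cpu as the file
// "cpu_request" inside a projected downwardAPI volume.
func cpuRequestProjection(containerName string) corev1.VolumeProjection {
	return corev1.VolumeProjection{
		DownwardAPI: &corev1.DownwardAPIProjection{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path: "cpu_request",
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: containerName,
					Resource:      "requests.cpu",
					Divisor:       resource.MustParse("1m"), // report the value in millicores
				},
			}},
		},
	}
}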
May 24 12:35:42.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:35:42.207: INFO: namespace: e2e-tests-projected-v8whf, resource: bindings, ignored listing per whitelist May 24 12:35:42.245: INFO: namespace e2e-tests-projected-v8whf deletion completed in 6.099294219s • [SLOW TEST:10.373 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:35:42.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 24 12:35:42.364: INFO: Waiting up to 5m0s for pod "downward-api-14eaab29-9dbb-11ea-9618-0242ac110016" in namespace "e2e-tests-downward-api-ffdqr" to be "success or failure" May 24 12:35:42.379: INFO: Pod "downward-api-14eaab29-9dbb-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 14.652274ms May 24 12:35:44.384: INFO: Pod "downward-api-14eaab29-9dbb-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01926932s May 24 12:35:46.387: INFO: Pod "downward-api-14eaab29-9dbb-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022966181s STEP: Saw pod success May 24 12:35:46.387: INFO: Pod "downward-api-14eaab29-9dbb-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:35:46.391: INFO: Trying to get logs from node hunter-worker2 pod downward-api-14eaab29-9dbb-11ea-9618-0242ac110016 container dapi-container: STEP: delete the pod May 24 12:35:46.490: INFO: Waiting for pod downward-api-14eaab29-9dbb-11ea-9618-0242ac110016 to disappear May 24 12:35:46.503: INFO: Pod downward-api-14eaab29-9dbb-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:35:46.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ffdqr" for this suite. 
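The Downward API env-var case injects the container's own limits.cpu/memory and requests.cpu/memory as environment variables and has the container echo them. The EnvVar wiring looks roughly like this sketch; the variable names are illustrative:

package sketches

import corev1 "k8s.io/api/core/v1"

// resourceEnvVars exposes the container's own CPU/memory limits and requests
// as environment variables via downward-API resourceFieldRef sources.
func resourceEnvVars(containerName string) []corev1.EnvVar {
	fields := map[string]string{
		"CPU_LIMIT":      "limits.cpu",
		"MEMORY_LIMIT":   "limits.memory",
		"CPU_REQUEST":    "requests.cpu",
		"MEMORY_REQUEST": "requests.memory",
	}
	vars := make([]corev1.EnvVar, 0, len(fields))
	for name, res := range fields {
		vars = append(vars, corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: containerName,
					Resource:      res,
				},
			},
		})
	}
	return vars
}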
May 24 12:35:52.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:35:52.571: INFO: namespace: e2e-tests-downward-api-ffdqr, resource: bindings, ignored listing per whitelist May 24 12:35:52.614: INFO: namespace e2e-tests-downward-api-ffdqr deletion completed in 6.107169684s • [SLOW TEST:10.368 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:35:52.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-bkd8 STEP: Creating a pod to test atomic-volume-subpath May 24 12:35:52.773: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-bkd8" in namespace "e2e-tests-subpath-rnjl5" to be "success or failure" May 24 12:35:52.795: INFO: Pod "pod-subpath-test-projected-bkd8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.171289ms May 24 12:35:54.799: INFO: Pod "pod-subpath-test-projected-bkd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026339305s May 24 12:35:56.804: INFO: Pod "pod-subpath-test-projected-bkd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030867112s May 24 12:35:58.809: INFO: Pod "pod-subpath-test-projected-bkd8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036132133s May 24 12:36:00.814: INFO: Pod "pod-subpath-test-projected-bkd8": Phase="Running", Reason="", readiness=false. Elapsed: 8.040679903s May 24 12:36:02.818: INFO: Pod "pod-subpath-test-projected-bkd8": Phase="Running", Reason="", readiness=false. Elapsed: 10.045314983s May 24 12:36:04.822: INFO: Pod "pod-subpath-test-projected-bkd8": Phase="Running", Reason="", readiness=false. Elapsed: 12.049253896s May 24 12:36:06.826: INFO: Pod "pod-subpath-test-projected-bkd8": Phase="Running", Reason="", readiness=false. Elapsed: 14.053086293s May 24 12:36:08.830: INFO: Pod "pod-subpath-test-projected-bkd8": Phase="Running", Reason="", readiness=false. Elapsed: 16.057268049s May 24 12:36:10.834: INFO: Pod "pod-subpath-test-projected-bkd8": Phase="Running", Reason="", readiness=false. Elapsed: 18.061158745s May 24 12:36:12.839: INFO: Pod "pod-subpath-test-projected-bkd8": Phase="Running", Reason="", readiness=false. Elapsed: 20.065834249s May 24 12:36:14.843: INFO: Pod "pod-subpath-test-projected-bkd8": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.069842508s May 24 12:36:16.847: INFO: Pod "pod-subpath-test-projected-bkd8": Phase="Running", Reason="", readiness=false. Elapsed: 24.073798094s May 24 12:36:18.851: INFO: Pod "pod-subpath-test-projected-bkd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.077718951s STEP: Saw pod success May 24 12:36:18.851: INFO: Pod "pod-subpath-test-projected-bkd8" satisfied condition "success or failure" May 24 12:36:18.854: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-bkd8 container test-container-subpath-projected-bkd8: STEP: delete the pod May 24 12:36:18.874: INFO: Waiting for pod pod-subpath-test-projected-bkd8 to disappear May 24 12:36:18.920: INFO: Pod pod-subpath-test-projected-bkd8 no longer exists STEP: Deleting pod pod-subpath-test-projected-bkd8 May 24 12:36:18.920: INFO: Deleting pod "pod-subpath-test-projected-bkd8" in namespace "e2e-tests-subpath-rnjl5" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:36:18.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-rnjl5" for this suite. May 24 12:36:24.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:36:24.964: INFO: namespace: e2e-tests-subpath-rnjl5, resource: bindings, ignored listing per whitelist May 24 12:36:25.010: INFO: namespace e2e-tests-subpath-rnjl5 deletion completed in 6.08140332s • [SLOW TEST:32.396 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:36:25.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments May 24 12:36:25.119: INFO: Waiting up to 5m0s for pod "client-containers-2e63cbb4-9dbb-11ea-9618-0242ac110016" in namespace "e2e-tests-containers-sk6gj" to be "success or failure" May 24 12:36:25.168: INFO: Pod "client-containers-2e63cbb4-9dbb-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 48.940251ms May 24 12:36:27.172: INFO: Pod "client-containers-2e63cbb4-9dbb-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053070229s May 24 12:36:29.177: INFO: Pod "client-containers-2e63cbb4-9dbb-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.057642363s STEP: Saw pod success May 24 12:36:29.177: INFO: Pod "client-containers-2e63cbb4-9dbb-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:36:29.180: INFO: Trying to get logs from node hunter-worker pod client-containers-2e63cbb4-9dbb-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 12:36:29.217: INFO: Waiting for pod client-containers-2e63cbb4-9dbb-11ea-9618-0242ac110016 to disappear May 24 12:36:29.256: INFO: Pod client-containers-2e63cbb4-9dbb-11ea-9618-0242ac110016 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:36:29.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-sk6gj" for this suite. May 24 12:36:35.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:36:35.367: INFO: namespace: e2e-tests-containers-sk6gj, resource: bindings, ignored listing per whitelist May 24 12:36:35.372: INFO: namespace e2e-tests-containers-sk6gj deletion completed in 6.112597093s • [SLOW TEST:10.362 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:36:35.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:36:35.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-pk98c" for this suite. 
May 24 12:36:41.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:36:41.693: INFO: namespace: e2e-tests-kubelet-test-pk98c, resource: bindings, ignored listing per whitelist May 24 12:36:41.826: INFO: namespace e2e-tests-kubelet-test-pk98c deletion completed in 6.191069284s • [SLOW TEST:6.453 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:36:41.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 24 12:36:41.953: INFO: Waiting up to 5m0s for pod "pod-386d8d08-9dbb-11ea-9618-0242ac110016" in namespace "e2e-tests-emptydir-q5trb" to be "success or failure" May 24 12:36:41.957: INFO: Pod "pod-386d8d08-9dbb-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.606625ms May 24 12:36:43.960: INFO: Pod "pod-386d8d08-9dbb-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006876394s May 24 12:36:45.965: INFO: Pod "pod-386d8d08-9dbb-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011607276s STEP: Saw pod success May 24 12:36:45.965: INFO: Pod "pod-386d8d08-9dbb-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:36:45.968: INFO: Trying to get logs from node hunter-worker pod pod-386d8d08-9dbb-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 12:36:46.007: INFO: Waiting for pod pod-386d8d08-9dbb-11ea-9618-0242ac110016 to disappear May 24 12:36:46.064: INFO: Pod pod-386d8d08-9dbb-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:36:46.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-q5trb" for this suite. 
May 24 12:36:52.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:36:52.135: INFO: namespace: e2e-tests-emptydir-q5trb, resource: bindings, ignored listing per whitelist May 24 12:36:52.178: INFO: namespace e2e-tests-emptydir-q5trb deletion completed in 6.109552832s • [SLOW TEST:10.352 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:36:52.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 24 12:36:52.340: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 24 12:36:52.346: INFO: Number of nodes with available pods: 0 May 24 12:36:52.346: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 24 12:36:52.412: INFO: Number of nodes with available pods: 0 May 24 12:36:52.412: INFO: Node hunter-worker is running more than one daemon pod May 24 12:36:53.442: INFO: Number of nodes with available pods: 0 May 24 12:36:53.442: INFO: Node hunter-worker is running more than one daemon pod May 24 12:36:54.416: INFO: Number of nodes with available pods: 0 May 24 12:36:54.416: INFO: Node hunter-worker is running more than one daemon pod May 24 12:36:55.454: INFO: Number of nodes with available pods: 1 May 24 12:36:55.454: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 24 12:36:55.486: INFO: Number of nodes with available pods: 1 May 24 12:36:55.486: INFO: Number of running nodes: 0, number of available pods: 1 May 24 12:36:56.490: INFO: Number of nodes with available pods: 0 May 24 12:36:56.490: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 24 12:36:56.503: INFO: Number of nodes with available pods: 0 May 24 12:36:56.503: INFO: Node hunter-worker is running more than one daemon pod May 24 12:36:57.506: INFO: Number of nodes with available pods: 0 May 24 12:36:57.506: INFO: Node hunter-worker is running more than one daemon pod May 24 12:36:58.507: INFO: Number of nodes with available pods: 0 May 24 12:36:58.507: INFO: Node hunter-worker is running more than one daemon pod May 24 12:36:59.507: INFO: Number of nodes with available pods: 0 May 24 12:36:59.507: INFO: Node hunter-worker is running more than one daemon pod May 24 12:37:00.507: INFO: Number of nodes with available pods: 0 May 24 12:37:00.507: INFO: Node hunter-worker is running more than one daemon pod May 24 12:37:01.507: INFO: Number of nodes with available pods: 0 May 24 12:37:01.507: INFO: Node hunter-worker is running more than one daemon pod May 24 12:37:02.508: INFO: Number of nodes with available pods: 0 May 24 12:37:02.508: INFO: Node hunter-worker is running more than one daemon pod May 24 12:37:03.507: INFO: Number of nodes with available pods: 0 May 24 12:37:03.507: INFO: Node hunter-worker is running more than one daemon pod May 24 12:37:04.507: INFO: Number of nodes with available pods: 0 May 24 12:37:04.507: INFO: Node hunter-worker is running more than one daemon pod May 24 12:37:05.507: INFO: Number of nodes with available pods: 0 May 24 12:37:05.507: INFO: Node hunter-worker is running more than one daemon pod May 24 12:37:06.507: INFO: Number of nodes with available pods: 0 May 24 12:37:06.507: INFO: Node hunter-worker is running more than one daemon pod May 24 12:37:07.507: INFO: Number of nodes with available pods: 0 May 24 12:37:07.507: INFO: Node hunter-worker is running more than one daemon pod May 24 12:37:08.507: INFO: Number of nodes with available pods: 0 May 24 12:37:08.507: INFO: Node hunter-worker is running more than one daemon pod May 24 12:37:09.507: INFO: Number of nodes with available pods: 0 May 24 12:37:09.507: INFO: Node hunter-worker is running more than one daemon pod May 24 12:37:10.507: INFO: Number of nodes with available pods: 0 May 24 12:37:10.507: INFO: Node hunter-worker is running more than one daemon pod May 24 12:37:11.507: INFO: Number of nodes with available pods: 0 May 24 12:37:11.507: INFO: Node hunter-worker is running more than one daemon pod May 24 12:37:12.507: INFO: Number of nodes with available pods: 0 May 24 12:37:12.507: INFO: Node hunter-worker is running 
more than one daemon pod May 24 12:37:13.514: INFO: Number of nodes with available pods: 0 May 24 12:37:13.514: INFO: Node hunter-worker is running more than one daemon pod May 24 12:37:14.507: INFO: Number of nodes with available pods: 1 May 24 12:37:14.507: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tk82p, will wait for the garbage collector to delete the pods May 24 12:37:14.571: INFO: Deleting DaemonSet.extensions daemon-set took: 5.084447ms May 24 12:37:14.671: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.248721ms May 24 12:37:21.316: INFO: Number of nodes with available pods: 0 May 24 12:37:21.316: INFO: Number of running nodes: 0, number of available pods: 0 May 24 12:37:21.319: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tk82p/daemonsets","resourceVersion":"12276497"},"items":null} May 24 12:37:21.322: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tk82p/pods","resourceVersion":"12276497"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:37:21.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-tk82p" for this suite. May 24 12:37:27.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:37:27.428: INFO: namespace: e2e-tests-daemonsets-tk82p, resource: bindings, ignored listing per whitelist May 24 12:37:27.446: INFO: namespace e2e-tests-daemonsets-tk82p deletion completed in 6.084812014s • [SLOW TEST:35.268 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:37:27.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy May 24 12:37:27.590: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix751682398/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:37:27.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-52wqm" for this suite. May 24 12:37:33.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:37:33.764: INFO: namespace: e2e-tests-kubectl-52wqm, resource: bindings, ignored listing per whitelist May 24 12:37:33.835: INFO: namespace e2e-tests-kubectl-52wqm deletion completed in 6.169833056s • [SLOW TEST:6.389 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:37:33.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-576860d0-9dbb-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 12:37:33.955: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-576c80ea-9dbb-11ea-9618-0242ac110016" in namespace "e2e-tests-projected-92x6m" to be "success or failure" May 24 12:37:33.959: INFO: Pod "pod-projected-secrets-576c80ea-9dbb-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 3.687948ms May 24 12:37:35.963: INFO: Pod "pod-projected-secrets-576c80ea-9dbb-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008251145s May 24 12:37:37.967: INFO: Pod "pod-projected-secrets-576c80ea-9dbb-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01240669s STEP: Saw pod success May 24 12:37:37.967: INFO: Pod "pod-projected-secrets-576c80ea-9dbb-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:37:37.971: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-576c80ea-9dbb-11ea-9618-0242ac110016 container projected-secret-volume-test: STEP: delete the pod May 24 12:37:38.003: INFO: Waiting for pod pod-projected-secrets-576c80ea-9dbb-11ea-9618-0242ac110016 to disappear May 24 12:37:38.007: INFO: Pod pod-projected-secrets-576c80ea-9dbb-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:37:38.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-92x6m" for this suite. 
May 24 12:37:44.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:37:44.153: INFO: namespace: e2e-tests-projected-92x6m, resource: bindings, ignored listing per whitelist May 24 12:37:44.175: INFO: namespace e2e-tests-projected-92x6m deletion completed in 6.16536032s • [SLOW TEST:10.340 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:37:44.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-5d9d78bd-9dbb-11ea-9618-0242ac110016 STEP: Creating a pod to test consume configMaps May 24 12:37:44.350: INFO: Waiting up to 5m0s for pod "pod-configmaps-5d9fe376-9dbb-11ea-9618-0242ac110016" in namespace "e2e-tests-configmap-jnkvl" to be "success or failure" May 24 12:37:44.395: INFO: Pod "pod-configmaps-5d9fe376-9dbb-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 44.818924ms May 24 12:37:46.424: INFO: Pod "pod-configmaps-5d9fe376-9dbb-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074026648s May 24 12:37:48.427: INFO: Pod "pod-configmaps-5d9fe376-9dbb-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07691572s STEP: Saw pod success May 24 12:37:48.427: INFO: Pod "pod-configmaps-5d9fe376-9dbb-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:37:48.429: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-5d9fe376-9dbb-11ea-9618-0242ac110016 container configmap-volume-test: STEP: delete the pod May 24 12:37:48.518: INFO: Waiting for pod pod-configmaps-5d9fe376-9dbb-11ea-9618-0242ac110016 to disappear May 24 12:37:48.546: INFO: Pod pod-configmaps-5d9fe376-9dbb-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:37:48.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-jnkvl" for this suite. 
May 24 12:37:54.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:37:54.604: INFO: namespace: e2e-tests-configmap-jnkvl, resource: bindings, ignored listing per whitelist May 24 12:37:54.651: INFO: namespace e2e-tests-configmap-jnkvl deletion completed in 6.100699682s • [SLOW TEST:10.475 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:37:54.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 24 12:37:54.764: INFO: Waiting up to 5m0s for pod "pod-63d49fbe-9dbb-11ea-9618-0242ac110016" in namespace "e2e-tests-emptydir-5rvfw" to be "success or failure" May 24 12:37:54.780: INFO: Pod "pod-63d49fbe-9dbb-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 16.170674ms May 24 12:37:56.783: INFO: Pod "pod-63d49fbe-9dbb-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01973715s May 24 12:37:58.787: INFO: Pod "pod-63d49fbe-9dbb-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023736872s STEP: Saw pod success May 24 12:37:58.787: INFO: Pod "pod-63d49fbe-9dbb-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:37:58.790: INFO: Trying to get logs from node hunter-worker2 pod pod-63d49fbe-9dbb-11ea-9618-0242ac110016 container test-container: STEP: delete the pod May 24 12:37:58.857: INFO: Waiting for pod pod-63d49fbe-9dbb-11ea-9618-0242ac110016 to disappear May 24 12:37:58.864: INFO: Pod pod-63d49fbe-9dbb-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:37:58.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-5rvfw" for this suite. 
May 24 12:38:04.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:38:04.934: INFO: namespace: e2e-tests-emptydir-5rvfw, resource: bindings, ignored listing per whitelist May 24 12:38:04.966: INFO: namespace e2e-tests-emptydir-5rvfw deletion completed in 6.098423028s • [SLOW TEST:10.314 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:38:04.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0524 12:38:17.019682 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 24 12:38:17.019: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:38:17.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-7mxt6" for this suite. 
May 24 12:38:25.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:38:25.050: INFO: namespace: e2e-tests-gc-7mxt6, resource: bindings, ignored listing per whitelist May 24 12:38:25.107: INFO: namespace e2e-tests-gc-7mxt6 deletion completed in 8.084582435s • [SLOW TEST:20.141 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:38:25.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode May 24 12:38:25.226: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-8pz6h" to be "success or failure" May 24 12:38:25.251: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 25.676245ms May 24 12:38:27.269: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043609786s May 24 12:38:29.273: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047761787s May 24 12:38:31.278: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052113365s STEP: Saw pod success May 24 12:38:31.278: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 24 12:38:31.281: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 24 12:38:31.325: INFO: Waiting for pod pod-host-path-test to disappear May 24 12:38:31.333: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:38:31.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-8pz6h" for this suite. 
May 24 12:38:37.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:38:37.423: INFO: namespace: e2e-tests-hostpath-8pz6h, resource: bindings, ignored listing per whitelist May 24 12:38:37.446: INFO: namespace e2e-tests-hostpath-8pz6h deletion completed in 6.109515262s • [SLOW TEST:12.338 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:38:37.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-56trw May 24 12:38:41.577: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-56trw STEP: checking the pod's current state and verifying that restartCount is present May 24 12:38:41.580: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:42:42.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-56trw" for this suite. 
May 24 12:42:48.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:42:48.224: INFO: namespace: e2e-tests-container-probe-56trw, resource: bindings, ignored listing per whitelist May 24 12:42:48.286: INFO: namespace e2e-tests-container-probe-56trw deletion completed in 6.084070803s • [SLOW TEST:250.841 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:42:48.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-12de3280-9dbc-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 12:42:48.440: INFO: Waiting up to 5m0s for pod "pod-secrets-12e055f9-9dbc-11ea-9618-0242ac110016" in namespace "e2e-tests-secrets-ddwg8" to be "success or failure" May 24 12:42:48.444: INFO: Pod "pod-secrets-12e055f9-9dbc-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137269ms May 24 12:42:50.449: INFO: Pod "pod-secrets-12e055f9-9dbc-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008342052s May 24 12:42:52.454: INFO: Pod "pod-secrets-12e055f9-9dbc-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013858647s STEP: Saw pod success May 24 12:42:52.454: INFO: Pod "pod-secrets-12e055f9-9dbc-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:42:52.458: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-12e055f9-9dbc-11ea-9618-0242ac110016 container secret-volume-test: STEP: delete the pod May 24 12:42:52.482: INFO: Waiting for pod pod-secrets-12e055f9-9dbc-11ea-9618-0242ac110016 to disappear May 24 12:42:52.486: INFO: Pod pod-secrets-12e055f9-9dbc-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:42:52.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ddwg8" for this suite. 
May 24 12:42:58.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:42:58.576: INFO: namespace: e2e-tests-secrets-ddwg8, resource: bindings, ignored listing per whitelist May 24 12:42:58.581: INFO: namespace e2e-tests-secrets-ddwg8 deletion completed in 6.092255812s • [SLOW TEST:10.295 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:42:58.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-18f94b46-9dbc-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 12:42:58.691: INFO: Waiting up to 5m0s for pod "pod-secrets-18f9dce7-9dbc-11ea-9618-0242ac110016" in namespace "e2e-tests-secrets-x2f7p" to be "success or failure" May 24 12:42:58.728: INFO: Pod "pod-secrets-18f9dce7-9dbc-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 37.462869ms May 24 12:43:00.776: INFO: Pod "pod-secrets-18f9dce7-9dbc-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085545562s May 24 12:43:02.781: INFO: Pod "pod-secrets-18f9dce7-9dbc-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090254222s STEP: Saw pod success May 24 12:43:02.781: INFO: Pod "pod-secrets-18f9dce7-9dbc-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:43:02.784: INFO: Trying to get logs from node hunter-worker pod pod-secrets-18f9dce7-9dbc-11ea-9618-0242ac110016 container secret-volume-test: STEP: delete the pod May 24 12:43:02.807: INFO: Waiting for pod pod-secrets-18f9dce7-9dbc-11ea-9618-0242ac110016 to disappear May 24 12:43:02.827: INFO: Pod pod-secrets-18f9dce7-9dbc-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:43:02.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-x2f7p" for this suite. 
May 24 12:43:08.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:43:08.914: INFO: namespace: e2e-tests-secrets-x2f7p, resource: bindings, ignored listing per whitelist May 24 12:43:08.928: INFO: namespace e2e-tests-secrets-x2f7p deletion completed in 6.097756879s • [SLOW TEST:10.347 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:43:08.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-1f282653-9dbc-11ea-9618-0242ac110016 STEP: Creating a pod to test consume secrets May 24 12:43:09.097: INFO: Waiting up to 5m0s for pod "pod-secrets-1f28d9d9-9dbc-11ea-9618-0242ac110016" in namespace "e2e-tests-secrets-w8b77" to be "success or failure" May 24 12:43:09.106: INFO: Pod "pod-secrets-1f28d9d9-9dbc-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 9.111853ms May 24 12:43:11.110: INFO: Pod "pod-secrets-1f28d9d9-9dbc-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013162089s May 24 12:43:13.115: INFO: Pod "pod-secrets-1f28d9d9-9dbc-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017517467s STEP: Saw pod success May 24 12:43:13.115: INFO: Pod "pod-secrets-1f28d9d9-9dbc-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:43:13.118: INFO: Trying to get logs from node hunter-worker pod pod-secrets-1f28d9d9-9dbc-11ea-9618-0242ac110016 container secret-volume-test: STEP: delete the pod May 24 12:43:13.160: INFO: Waiting for pod pod-secrets-1f28d9d9-9dbc-11ea-9618-0242ac110016 to disappear May 24 12:43:13.196: INFO: Pod pod-secrets-1f28d9d9-9dbc-11ea-9618-0242ac110016 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:43:13.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-w8b77" for this suite. 
May 24 12:43:19.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 24 12:43:19.236: INFO: namespace: e2e-tests-secrets-w8b77, resource: bindings, ignored listing per whitelist May 24 12:43:19.317: INFO: namespace e2e-tests-secrets-w8b77 deletion completed in 6.117706001s • [SLOW TEST:10.389 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 24 12:43:19.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 24 12:43:19.441: INFO: Waiting up to 5m0s for pod "var-expansion-255ab64e-9dbc-11ea-9618-0242ac110016" in namespace "e2e-tests-var-expansion-llxxg" to be "success or failure" May 24 12:43:19.463: INFO: Pod "var-expansion-255ab64e-9dbc-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 22.294606ms May 24 12:43:21.468: INFO: Pod "var-expansion-255ab64e-9dbc-11ea-9618-0242ac110016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026719895s May 24 12:43:23.472: INFO: Pod "var-expansion-255ab64e-9dbc-11ea-9618-0242ac110016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030861243s STEP: Saw pod success May 24 12:43:23.472: INFO: Pod "var-expansion-255ab64e-9dbc-11ea-9618-0242ac110016" satisfied condition "success or failure" May 24 12:43:23.476: INFO: Trying to get logs from node hunter-worker pod var-expansion-255ab64e-9dbc-11ea-9618-0242ac110016 container dapi-container: STEP: delete the pod May 24 12:43:23.671: INFO: Waiting for pod var-expansion-255ab64e-9dbc-11ea-9618-0242ac110016 to disappear May 24 12:43:23.813: INFO: Pod var-expansion-255ab64e-9dbc-11ea-9618-0242ac110016 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 24 12:43:23.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-llxxg" for this suite. 
May 24 12:43:29.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 24 12:43:29.924: INFO: namespace: e2e-tests-var-expansion-llxxg, resource: bindings, ignored listing per whitelist
May 24 12:43:29.926: INFO: namespace e2e-tests-var-expansion-llxxg deletion completed in 6.107973694s

• [SLOW TEST:10.608 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
May 24 12:43:29.926: INFO: Running AfterSuite actions on all nodes
May 24 12:43:29.926: INFO: Running AfterSuite actions on node 1
May 24 12:43:29.926: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6808.116 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS